Week 14 - Friday

What did we talk about last time?
- Non-photorealistic rendering
- Cel shading
  - Discrete coloring
  - Silhouetting

- The 1980s saw the first dedicated chips for accelerating PC graphics
  - They specialized in 2D operations
  - Blitting: performing a BLock Image Transfer, essentially copying rectangular chunks of memory (see the code sketch after the Nvidia feature list below)
- The early 90s saw many chips with specialized 2D functions to accelerate MS Windows
  - MS developed APIs just for this
- The mid 90s saw the rise of separate 3D accelerators like the Voodoo2
- By the late 90s and early 2000s, the trend was back toward integrating 2D, 3D, and video processing capabilities into one card
- Programmable shading arrived with the GeForce3 and matured through the mid 2000s
  - Since then, shader programming has gotten more flexible
  - Stream models allow GPUs to be used for general-purpose computation at a massively parallel scale on a single chip

- 3Dfx Interactive was a huge developer in the field until they went bankrupt in 2002
  - Bought by Nvidia
- Nvidia is a current manufacturer, famous for its GeForce gaming line, Quadro professional products, and Tesla GPGPU processors
- AMD bought ATI in 2006 and manufactures the Radeon line of gaming processors, FirePro professional products, and FireStream GPGPU processors, as well as general-purpose CPUs
- Intel is also a major player in GPUs but focuses on budget models

- Moore's Law has fallen off, but semiconductor features continue to get smaller
  - Packing more gates into the same area allows for faster speeds and greater parallelism, but more heat
  - Shrinking feature sizes still drive GPU performance

- AMD Accelerated Processing Units are a combined single-chip CPU and GPU
  - They provide generally lower-cost alternatives to a separate CPU and GPU
  - Some floating-point operations can be offloaded to the GPU during normal computing
  - Designed mostly for lower-power, lower-performance hardware like tablets and laptops
- AMD just released its Polaris architecture using the incredibly small 14 nm FinFET process
- AMD is hyping its LiquidVR technology, an API focused on Oculus, Vive, Sulon, and similar VR headsets

- Tearing still happens
  - Apparently some people are annoyed by it
  - AMD has technology that, if used with compatible devices, dynamically changes the monitor refresh rate
- Improved anisotropic filtering
  - Same algorithm, but "better" weights applied to samples
  - Less shimmer
- Improved DirectX 11 tessellation
- Lower power consumption
  - Detects short and long idle states and throttles back the GPU
- Eyefinity is their multi-screen technology
  - Up to 6 monitors
- CrossFire is their technology for putting 2, 3, or 4 GPUs in a single system to divide the work
- HD3D is their technology for supporting 3D monitors
- TressFX hair physics system
- The Mantle system to help port games from consoles to PCs
- Virtual Super Resolution: rendering at a higher resolution and then downsampling to your monitor's resolution
- Support for HDR monitors

- Nvidia is pushing for better ray tracing with their OptiX API
- Iray is their interactive ray tracing technology
  - It's aimed at people using Maya and 3DS Max, not gamers
  - It's getting fast enough that it could be used in some games now
- Mental Ray is their non-interactive ray tracing product, acquired in 2007
  - Real-deal ray tracing, used in Matrix 2 and 3 and Star Wars Episode II
- G-SYNC adaptive VSync to avoid tearing and stuttering
- New TXAA antialiasing that combines MSAA and a temporal filter
- GPU support for the PhysX physics engine
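As promised above, here is a rough picture of what a 2D blit does. This is only a minimal software sketch; the function name, pixel format, and surface layout are assumptions for illustration, not the interface of any particular chip or API.

    #include <stdint.h>
    #include <string.h>

    // Copy a w-by-h rectangle of 32-bit pixels from one surface to another.
    // Dedicated 2D hardware performed this kind of copy (often with extra
    // raster operations) without tying up the CPU.
    void blit(const uint32_t *src, int srcPitch,   // source pixels, pixels per source row
              uint32_t *dst, int dstPitch,         // destination pixels, pixels per destination row
              int srcX, int srcY,                  // top-left corner of the source rectangle
              int dstX, int dstY,                  // top-left corner of the destination rectangle
              int w, int h)                        // rectangle size in pixels
    {
        for (int row = 0; row < h; row++) {
            const uint32_t *from = src + (srcY + row) * srcPitch + srcX;
            uint32_t       *to   = dst + (dstY + row) * dstPitch + dstX;
            memcpy(to, from, w * sizeof(uint32_t)); // copy one scanline at a time
        }
    }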
- Compute Unified Device Architecture (CUDA) is Nvidia's GPGPU parallel computing platform (a minimal kernel sketch appears after the console notes below)
  - Many high-performance computing researchers are excited about it
  - AMD isn't keeping up with CUDA (although they have GPGPU support through OpenCL and DirectX 11)
  - However, CUDA has a complex programming model
  - It's intended for non-graphical purposes, but some special applications of image processing and "real time" ray tracing can show up in games

- Nvidia and AMD keep a close watch on each other
  - It's not surprising that their innovations are similar
- They're pushing 4K, which is really just higher resolution
- 3D Vision is Nvidia's 3D display technology
- Surround and 3D Vision Surround are Nvidia's multiple-display technologies
  - They now support up to 5 monitors and even multiple 3D monitors
- SLI is Nvidia's technology for rendering with 2, 3, or 4 GPUs
- HairWorks for hair rendering
- Optimus is their power-saving technology for laptops

- AMD is a chip maker with their fingers in a lot of pies already
- Nvidia is trying to diversify with hardware support for deep learning and AI
  - These are computationally intensive tasks with potential for massive parallelism
  - That's exactly what graphics companies are doing already

- Both AMD and Nvidia are working to support DirectX 12
  - DirectX 12 is better at spreading work out over multiple CPU cores
  - Another thing DirectX 12 can do is run shaders asynchronously
  - More control is given to the programmer
  - DirectX 12's focus seems to be on performance improvements, not new features

- The OS or programming language often provides some 2D APIs
- OpenGL is the big name in open source 3D (and 2D) graphics APIs
- Glide was a proprietary API based on OpenGL and created by 3Dfx
  - It had sharp limitations like 16-bit color but was successful because it was simpler and easier to use than OpenGL or DirectX
- DirectX is Microsoft's API
  - MonoGame is a C# wrapper around DirectX calls
- Vulkan is a new API that will supposedly be used by the Source 2 engine

- Starting in the mid-2000s, many games have begun to have complex physics simulation engines
  - Nvidia's PhysX technology is used in the Unreal, Unity 3D, Diesel, and Torque engines
  - The Havok physics engine is used in the BioShock series, Call of Duty: Black Ops, Oblivion and Skyrim, Fallout 3 and New Vegas, Half-Life 2 and Portal 1 and 2, and many others
  - The open source Box2D was used in Angry Birds and Limbo

- Video games generate more revenue than Hollywood movies
  - In 2015, video games brought in $71 billion compared to the $38 billion of box office revenues
- Console sales:
  - 101.6 million Wiis
  - 13.4 million Wii Us
  - 80 million Xbox 360s
  - 19 million Xbox Ones
  - 80 million PS3s
  - 53.4 million PS4s
  - PC games are a small part of the pie
- The Wii U was released in late 2012
  - Reviews have been mixed
  - 13.4 million sold
- PlayStation 4 was released at the end of 2013
  - x86 (well, really x64) architecture AMD chip
  - 53.4 million sold
- Xbox One was also released at the end of 2013
  - Another 64-bit x86 AMD chip
  - 19 million sold
- The Valve Steam Machine came out at the end of 2015
  - Really just a set of specifications for a PC
- Nintendo Switch was released March 3, 2017
  - Too early to say how it will do
  - Zelda: Breath of the Wild is currently outselling the Nintendo Switch itself?
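As mentioned above, here is the canonical vector-addition example as a minimal sketch of the CUDA programming model. It is generic CUDA C++ written for illustration (kernel and variable names are made up, error checking omitted), not code from the course or from Nvidia's documentation.

    #include <cuda_runtime.h>
    #include <stdio.h>
    #include <stdlib.h>

    // Each thread adds one pair of elements; the GPU runs thousands of these
    // threads in parallel, which is the "massively parallel" model in the notes.
    __global__ void vecAdd(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main(void)
    {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        // Allocate and fill host arrays.
        float *ha = (float *)malloc(bytes);
        float *hb = (float *)malloc(bytes);
        float *hc = (float *)malloc(bytes);
        for (int i = 0; i < n; i++) { ha[i] = 1.0f; hb[i] = 2.0f; }

        // Allocate device memory and copy the inputs over.
        float *da, *db, *dc;
        cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover all n elements.
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(da, db, dc, n);

        // Copy the result back and spot-check it.
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %f\n", hc[0]);   // expect 3.0

        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }

Each thread computes one output element; the <<<blocks, threads>>> launch syntax is what maps the loop onto thousands of parallel GPU threads, which is the part of the model with no analogue in ordinary C++.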
- The Wii made a big splash with its accelerometer-based controls
- The PlayStation Move came out years later in an effort to compete
- The Xbox Kinect has no contact with the player at all
- These forms of interaction, as well as Facebook, phone, and other casual gaming outlets, have dramatically increased gaming, especially among women
- Phone games like Angry Birds have had great financial success despite being 2D
- Some developers have used Kickstarter to fund projects that large studios would not have funded
  - 2D games have received millions of dollars in Kickstarter campaigns
  - Torment: Tides of Numenera got $4,188,927
  - Pillars of Eternity got $4,163,208
  - Wasteland 2 got $2,933,252

SIGGRAPH 2012

- Skeleton-based animation has been studied heavily
  - New research looks at the movement of soft bodies without skeletons
  - A finite element approach is used
- Performing motion capture is difficult
  - Often uses sensors placed all over the body
  - Or multiple or binocular cameras
  - New research allows reasonably good motion capture with a single camera based on the constraints of a normal human walking on two feet
- Selecting 3D shapes from a scene or a database requires some technical knowledge or experience with a tool
  - This research takes a 2D black and white sketch and returns the matching object
- Decoupling Algorithms from Schedules for Easy Optimization of Image Processing Pipelines
  - In graphics, easy-to-read code is often not well-optimized
  - This research creates a platform for describing the algorithm and the optimization in clean code
- Research continues in filtering
  - These researchers claim to do filtering in very high dimensions
  - I'm not sure what that means
  - The pictures below use 8D, 5D, and 27D, respectively
- This research works on a method to simulate clothing movement and appearance
  - Once the clothing movement has been learned, it can be applied to people with different shapes and sizes
  - A similar research project takes patterns used for the clothes of one character and changes them appropriately to fit another
- Continuing in the clothing theme, these guys automatically create realistic renderings of sweaters
  - They start with a mesh, apply a stitch pattern to it, and perform several relaxations
- Rather than directly motion capturing hand movements (tricky), they capture the places where the hand makes contact with items
  - Then, they create physically plausible reconstructions of possible movements
  - Doing so allows them to randomize the process so that the same motion can look slightly different
- Tracking things with our eyes (and consequently head) is an important part of how we work
  - By using eye movements as a feature of animation, they can generate more believable responses to events with simulated characters
- Mapping 2D images to 3D objects is hard
  - This research allows a user to add cross-section curves to guide the reconstruction process
- What if you wanted to turn your 3D (computer) model into a 3D (real) model?
  - This research turns a skinned mesh into a model that can be created with articulation points and generated with a 3D printer
  - So you can play with it!
- Or, if you prefer beads, you can create a 3D model and have a system automatically compute the bead locations and wire paths so that you can make a beadwork version of it
  - It prints instructions you can follow!
- People have worked a fair bit on modeling trees
  - This research takes an existing tree model and deforms it to its environment
  - It approximates biological reactions to space and light constraints
- Rendering fluids is difficult, especially different fluids mixing together
  - These guys have a way to capture the 3D process of fluids mixing
- It takes a long time to model things
  - What if we gave you a few sample things?
  - This research takes training models and makes variations based on those models
  - 1267 new airplanes from 100 input models
  - Really, it's more of the same, except that they're making lamps, not planes
- People continue to work on complex ray tracing methods and how to make them more efficient
  - This research focuses on ray tracing in media with volumetric scattering
  - It uses something called "virtual ray lights"
- Animating fluids is hard
  - These guys use a Smoothed Particle Hydrodynamics approach for simulating liquid (a minimal density-computation sketch appears at the very end of these notes)
  - Their improvement is special "ghost" particles in the air and near the surface to provide realistic surface tension properties
- Another difficulty is modeling the effects of water on rigid objects and vice versa
  - These guys do some clever sampling of where surfaces interact with the fluid
- Or the foam in the liquid might be what you focus on
  - These guys use weighted Voronoi diagrams to approximate the bubble dynamics
  - Each frame of 100,000 bubbles can be simulated in about 20 seconds
- They want to analyze video for subtle temporal changes that your eyes don't pick up on
  - They amplify changes in the signal
  - The example shows the flow of blood to the man's face because of his heartbeat
- A single frame of video is often blurry
  - These guys use techniques to remove large-scale motions from a single frame, deblurring it
- This is another paper about rendering woven objects
  - It uses scanned samples of fabric textures to create volumetric effects
- They're using high-resolution displays like iPads to correct vision
  - You don't have to wear contact lenses or glasses to look at it!
- In the area of collision response, people are working to deal with large numbers of collisions in close proximity without weird jittering artifacts
  - Below: 5000 boxes coming to rest on an irregular surface, rendered at 60 fps
- Fracture simulation
  - Building destruction in real time

- Review up to Exam 1
- IDEA evaluations
- Finish Assignment 5
  - Due tonight by midnight
- Keep working on Project 4
  - Due next Friday
- Begin reviewing for the final
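As a brief aside on the Smoothed Particle Hydrodynamics approach mentioned in the fluid papers above: SPH treats the liquid as a set of particles and estimates field quantities, such as density, by summing a smoothing kernel over nearby particles. The sketch below is a generic, brute-force density pass using the standard poly6 kernel; all names are illustrative, and this is not the ghost-particle method from the paper.

    #include <vector>

    struct Particle { float x, y, z; float density; };

    // Standard poly6 smoothing kernel:
    // W(r, h) = 315 / (64 * pi * h^9) * (h^2 - r^2)^3 for r < h, else 0.
    float poly6(float r2, float h)
    {
        float h2 = h * h;
        if (r2 >= h2) return 0.0f;
        float h3 = h * h * h;
        float h9 = h3 * h3 * h3;
        float diff = h2 - r2;
        return 315.0f / (64.0f * 3.14159265f * h9) * diff * diff * diff;
    }

    // Brute-force SPH density estimate: rho_i = sum_j m * W(|x_i - x_j|, h).
    // Real simulators replace the O(n^2) inner loop with a spatial hash grid.
    void computeDensities(std::vector<Particle> &ps, float mass, float h)
    {
        for (size_t i = 0; i < ps.size(); i++) {
            float rho = 0.0f;
            for (size_t j = 0; j < ps.size(); j++) {
                float dx = ps[i].x - ps[j].x;
                float dy = ps[i].y - ps[j].y;
                float dz = ps[i].z - ps[j].z;
                rho += mass * poly6(dx * dx + dy * dy + dz * dz, h);
            }
            ps[i].density = rho;
        }
    }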