<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Yang (Kevin) Liu]]></title><description><![CDATA[Yang Liu is a graphics programmer passionate about making graphics systems more efficient, approachable and extensible.]]></description><link>https://www.keliu.info/</link><image><url>https://www.keliu.info/favicon.png</url><title>Yang (Kevin) Liu</title><link>https://www.keliu.info/</link></image><generator>Ghost 2.37</generator><lastBuildDate>Wed, 06 May 2026 10:55:30 GMT</lastBuildDate><atom:link href="https://www.keliu.info/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Rendering Competition Project]]></title><description><![CDATA[Our group implemented participating media, a hair BSDF and volumetric photon mapping to render this image of a feather hidden inside amber.]]></description><link>https://www.keliu.info/rendering-algorithm-final-project/</link><guid isPermaLink="false">61afeff786135146354d415d</guid><category><![CDATA[Portfolio]]></category><category><![CDATA[Personal Projects]]></category><dc:creator><![CDATA[Yang Liu]]></dc:creator><pubDate>Wed, 01 Dec 2021 23:36:00 GMT</pubDate><media:content url="https://www.keliu.info/content/images/2021/12/merge2.png" medium="image"/><content:encoded><![CDATA[<img src="https://www.keliu.info/content/images/2021/12/merge2.png" alt="Rendering Competition Project"><p>Our group implemented participating media, a hair BSDF and volumetric photon mapping to render this image of a feather hidden inside amber. This image was selected as the runner-up in the rendering competition of the <a href="https://cs87-dartmouth.github.io/Fall2021/">Rendering Algorithms</a> course at Dartmouth. 
The theme for the rendering competition was "It's what's inside that counts".</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.keliu.info/content/images/2021/12/merge2-2.png" class="kg-image" alt="Rendering Competition Project"><figcaption>Final image submission</figcaption></figure><!--kg-card-end: image--><p>If you want to learn about the technical details, the full project report is <a href="https://ghostatspirit.github.io/darts_final_project/report.html">here</a>.</p><p>My teammate, Yang Qi, was responsible for adding participating media, a volumetric path tracer, a microfacet BSDF and a hair BSDF to our base renderer. I worked on implementing a progressive volumetric photon mapper and directional lights to achieve the caustic and light beam effects we wanted.</p><p>The photon mapper enabled us to render the caustic effect within a reasonable time. Here is a comparison between a path tracer with MIS (PT), a photon mapper (PM) and a stochastic progressive photon mapper (SPPM) rendering our caustics test scene. As you can see, the photon mapper converges much faster on caustics, and SPPM reduces the bias caused by blurring by progressively shrinking the photon gather radius, producing much sharper caustic edges.</p><!--kg-card-begin: gallery--><figure class="kg-card kg-gallery-card kg-width-wide kg-card-hascaption"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://www.keliu.info/content/images/2021/12/amber_box_mis.png" width="640" height="480" alt="Rendering Competition Project"></div><div class="kg-gallery-image"><img src="https://www.keliu.info/content/images/2021/12/amber_box_pm.png" width="640" height="480" alt="Rendering Competition Project"></div><div class="kg-gallery-image"><img src="https://www.keliu.info/content/images/2021/12/amber_box_sppm.png" width="640" height="480" alt="Rendering Competition Project"></div></div></div><figcaption>Left: Path Tracer (MIS). 
Middle: Photon Mapper. Right: Stochastic Progressive Photon Mapper.</figcaption></figure><!--kg-card-end: gallery-->]]></content:encoded></item><item><title><![CDATA[PIC-FLIP Fluid Simulation]]></title><description><![CDATA[The goal of this project is to implement the PIC-FLIP fluid simulation from the paper Animating Sand as a Fluid by Yongning Zhu and Robert Bridson.]]></description><link>https://www.keliu.info/pic-flip-fluid-simulation/</link><guid isPermaLink="false">61b03c8c86135146354d4226</guid><category><![CDATA[Portfolio]]></category><dc:creator><![CDATA[Yang Liu]]></dc:creator><pubDate>Wed, 16 Dec 2020 05:03:00 GMT</pubDate><media:content url="https://www.keliu.info/content/images/2021/12/heart-scene-flip.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://www.keliu.info/content/images/2021/12/heart-scene-flip.jpg" alt="PIC-FLIP Fluid Simulation"><p>The goal of this project is to implement the PIC-FLIP fluid simulation from the<br>paper Animating Sand as a Fluid by Yongning Zhu and Robert Bridson. This is a final project for the <a href="https://www.cs.dartmouth.edu/~bozhu/cosc89.18.html">Computational Methods for Physical Systems</a> course at Dartmouth.</p><!--kg-card-begin: embed--><figure class="kg-card kg-embed-card"><iframe width="200" height="150" src="https://www.youtube.com/embed/DfMK5B7ZQcg?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><!--kg-card-end: embed--><p>This was a two-person group project. I worked on coding marker particles, implementing the PIC/FLIP algorithm, figuring out how to enforce boundary conditions and adapting the cell-centered functions to work with the MAC grid.</p><p>Our group also tried comparing the results of using PIC, FLIP and combined PIC/FLIP. 
From the images below, we can see that the PIC method has a subdued wave, while the FLIP method shows many fluid particles breaking apart from the main fluid body. This demonstrates that PIC tends to dampen the fluid motion due to the velocity lost during interpolation. On the other hand, FLIP preserves the energy that PIC loses, but is often noisy and can become unstable. The combined method typically blends the two velocity updates, v = α·v<sub>FLIP</sub> + (1 − α)·v<sub>PIC</sub> with α close to 1, trading a little of FLIP's energy preservation for PIC's stability.</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.keliu.info/content/images/2021/12/PIC_FLIP_comparisons.PNG" class="kg-image" alt="PIC-FLIP Fluid Simulation"><figcaption>Comparisons between PIC, FLIP, and PIC/FLIP results at 5 seconds into the simulation with the same starting conditions.</figcaption></figure><!--kg-card-end: image--><p>Our biggest challenge in the project was to figure out how to enforce boundary conditions on cell-centered grids in the projection step. We found going through Chapter 5 of the book Fluid Simulation for Computer Graphics really helpful; it helped us understand how the boundary conditions work mathematically. 
After a lot of trial and error, we found that adapting the cell-centered velocity field to the face-centered velocity field by interpolation worked the best, since most of the equations make more sense with face-centered grids.</p><p>If you want to learn more about how to simulate fluid using PIC/FLIP, check out our course report <a href="https://1drv.ms/b/s!Apnacu5dG287gdpCb5JKBqnKhKasgw">here</a>.</p>]]></content:encoded></item><item><title><![CDATA[Automatic Panoramas]]></title><description><![CDATA[A post about how to produce environment maps with automatic panorama stitching.]]></description><link>https://www.keliu.info/automatic-paranomas/</link><guid isPermaLink="false">61b06a9286135146354d4292</guid><category><![CDATA[Blog]]></category><dc:creator><![CDATA[Yang Liu]]></dc:creator><pubDate>Mon, 02 Nov 2020 16:01:00 GMT</pubDate><media:content url="https://www.keliu.info/content/images/2021/12/diffRenderPanorama-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://www.keliu.info/content/images/2021/12/diffRenderPanorama-1.png" alt="Automatic Panoramas"><p>I want to write down what our group learned in our final project for the Computational Photography course. We wanted to explore how to capture a 360° environment map in the real world and use it to perform image-based lighting in Maya. We tried two approaches: the mirror ball unwrapping approach and the panorama stitching approach. I will mainly talk about the panorama-based approach since that is what I worked on.</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.keliu.info/content/images/2021/12/diffRenderPanorama.png" class="kg-image" alt="Automatic Panoramas"><figcaption>Combining a real-world image with a rendered image generated by Arnold. The virtual objects are lit by an environment map generated by the panorama stitching approach. 
This image was generated by my teammate Yang Qi in the Arnold renderer.</figcaption></figure><!--kg-card-end: image--><!--kg-card-begin: image--><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.keliu.info/content/images/2021/12/FullPanorama_extrapolate--1-.png" class="kg-image" alt="Automatic Panoramas"><figcaption>The automatically stitched panorama used in the image above.</figcaption></figure><!--kg-card-end: image--><p>I mainly followed the slides and problem sets in MIT's <a href="http://stellar.mit.edu/S/course/6/sp15/6.815/">6.815 Digital and Computational Photography</a> course to implement the auto-stitching part. The auto-stitching pipeline produces high-quality corresponding feature point pairs between two images in 4 stages: corner detection, descriptor creation, correspondence search and RANSAC.</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.keliu.info/content/images/2021/12/auto-stitch-stages.png" class="kg-image" alt="Automatic Panoramas"><figcaption>The 4 stages of our auto-stitching pipeline.</figcaption></figure><!--kg-card-end: image--><!--kg-card-begin: markdown--><p>In the first stage, we use the Harris Corner Detector to produce the feature points, which involves finding structure tensors, calculating responses and finding the local maxima of the responses. We also need a way to describe the neighborhood of a feature point, called a “descriptor”, to help us measure the similarity of two feature points. We used a simple patch descriptor: a $k \times k$ patch around a feature point, Gaussian-blurred and normalized. After getting the descriptors, we used the L2 distance to determine how &quot;close&quot; any two descriptors are and kept the descriptor pairs whose distance is under a threshold. We also used the second-best test to filter out overly ambiguous matches. 
Finally, even after the second-best test, our descriptor pairs usually still contain some outliers. We used RANSAC to filter out these final outliers.</p>
<p>RANSAC is really powerful for eliminating outliers in descriptor matches, as demonstrated in the image below. The green lines show the inlier matches, while the red lines show the outlier matches. The blue lines show the matches selected by RANSAC to calculate the final rotation matrix $R$ between the two images.</p>
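To make the idea concrete, here is a minimal RANSAC sketch in Python. For brevity it uses a pure 2D translation as the motion model (our actual pipeline estimates a rotation between views), so the function name and parameters below are illustrative only:

```python
import random

def ransac_translation(matches, iters=200, tol=2.0, rng=None):
    """Hypothetical minimal RANSAC. `matches` is a list of point pairs
    ((x1, y1), (x2, y2)). The model is a pure 2D translation, which a single
    match determines; real stitching pipelines fit a rotation or homography
    from a larger minimal sample instead."""
    rng = rng or random.Random()
    best_inliers = []
    for _ in range(iters):
        # 1-point random sample -> candidate translation hypothesis
        (ax, ay), (bx, by) = rng.choice(matches)
        tx, ty = bx - ax, by - ay
        # count matches that agree with the hypothesis within `tol` pixels
        inliers = [m for m in matches
                   if abs(m[0][0] + tx - m[1][0]) <= tol
                   and abs(m[0][1] + ty - m[1][1]) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```

The surviving inlier set is then used to fit the final transform with all inliers at once, which is far more stable than fitting on the raw matches.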
<!--kg-card-end: markdown--><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://www.keliu.info/content/images/2021/12/ransac.png" class="kg-image" alt="Automatic Panoramas"></figure><!--kg-card-end: image--><p>After being able to stitch two images together with auto-stitching, we still needed to figure out how to compose a full panorama by stitching N images. We can either stitch the images pair by pair locally, or use some global optimization approach. Due to time constraints, we decided to use the local approach. We found that if you first stitch all pitch angles of a single yaw, then stitch images of different yaw angles, you tend to get much less ghosting in the output image.</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://www.keliu.info/content/images/2021/12/local-stitch.png" class="kg-image" alt="Automatic Panoramas"></figure><!--kg-card-end: image--><!--kg-card-begin: markdown--><p>And this is how we automatically generated the panorama image above from a series of individual images. Compared to the mirror ball unwrapping approach, the panorama-based approach can produce environment maps with much higher resolution. However, when using the panorama-based approach, the north and south pole areas cause a lot of problems. For example, if the sky is a single color, the corner detector won't be able to find any corners and the whole pipeline will fail. To generate the final image combining a real-world photo with a virtual scene, we had to use some tricks (such as adding a hat to the mirror ball) to hide the holes in the north and south pole areas.</p>
<p>We further compared these two methods in this table:</p>
<!--kg-card-end: markdown--><!--kg-card-begin: html--><table><thead><tr><th>Method</th><th>Mirror Ball</th><th>Panorama</th></tr></thead><tbody><tr><td>Advantages</td><td>Requires only a few photos.<br>Contains the sky and ground regions naturally.</td><td>Higher resolution output.<br>Can capture up to 360 degrees in yaw.</td></tr><tr><td>Disadvantages</td><td>Output resolution is low.<br>Doesn’t capture the full 180 degrees in yaw (using only photos captured from 1 angle since our code does not stitch equirectangular maps generated from the ball).</td><td>Time-consuming photo capturing process.<br>Can’t stitch images with just the sky, resulting in a hole in the north pole.</td></tr></tbody></table><!--kg-card-end: html--><p>And that is basically everything we learned in this project! I want to thank my teammates, Josephine Nguyen and Yang Qi, for their amazing work. Without them I wouldn't have had the confidence to tackle such a broad project during COVID. I also had a lot of fun writing a job system to parallelize our C++ image processing code, but that will probably need its own post. Also, just for reference, Greg Zaal from HDRI Haven has a great <a href="https://blog.polyhaven.com/how-to-create-high-quality-hdri/">blog post</a> explaining what equipment he used to capture environment maps. 
I wish our media center also had a slide like his that lets the camera rotate around the entrance pupil...</p>]]></content:encoded></item><item><title><![CDATA[Foresight]]></title><description><![CDATA[Foresight is a VR camera system + mo-cap system aiming to expedite story and shot revision cycles in filmmaking.]]></description><link>https://www.keliu.info/foresight/</link><guid isPermaLink="false">5f62366086135146354d40a6</guid><category><![CDATA[Portfolio]]></category><dc:creator><![CDATA[Yang Liu]]></dc:creator><pubDate>Mon, 01 Jun 2020 22:41:00 GMT</pubDate><media:content url="https://www.keliu.info/content/images/2020/09/dolly-final-cropped.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: embed--><figure class="kg-card kg-embed-card"><iframe width="480" height="270" src="https://www.youtube.com/embed/0wtbfggElPg?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><!--kg-card-end: embed--><!--kg-card-begin: markdown--><img src="https://www.keliu.info/content/images/2020/09/dolly-final-cropped.png" alt="Foresight"><p>Foresight is a VR camera system + mo-cap system aiming to expedite story and shot revision cycles in filmmaking. As a team, we tried visualizing different genres of scripts and built our own tools and pipelines along the way. If you want to check out our development team and the dev blogs, please visit <a href="http://www.etc.cmu.edu/projects/foresight/">our project website</a>.</p>
<p>In this project, my main focus was the VR camera system, and I worked on a wide variety of tasks ranging from developing tools within Unity to implementing specific camera effects. Here is a brief video of the features in the camera system.</p>
<!--kg-card-end: markdown--><!--kg-card-begin: embed--><figure class="kg-card kg-embed-card"><iframe width="480" height="270" src="https://www.youtube.com/embed/sDY9eZGMtww?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><!--kg-card-end: embed--><!--kg-card-begin: markdown--><h3 id="mycontributions">My Contributions</h3>
<ul>
<li>Recreated camera properties and moves such as adjustable focal lengths, tilting/panning and dollying in VR.</li>
<li>Worked closely with the UI/UX designer to iterate on the camera panel UI and user interactions such as modifying the camera track</li>
<li>Wrote and debugged shaders to implement a Gaussian-based depth of field effect with a tweakable aperture, based on the method from Skylanders SWAP Force</li>
<li>Coded the serialization of camera motion and a tool to save/load camera motion</li>
</ul>
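For the depth-of-field item above: the per-pixel blur amount in such an effect is usually driven by a circle-of-confusion term. The sketch below is a generic thin-lens formula, not the actual Skylanders SWAP Force shader; the function name and parameters are my own:

```python
def coc_diameter(depth, focus_dist, focal_len, aperture):
    """Signed circle-of-confusion diameter for an ideal thin lens.
    All distances share one unit; positive means behind the focus plane,
    negative means in front. Hypothetical helper, not the shipped shader."""
    return (aperture * focal_len * (depth - focus_dist)
            / (depth * (focus_dist - focal_len)))
```

In a gather-style blur pass, this diameter (converted to pixels) would set the radius of the Gaussian kernel at each pixel, with the aperture parameter exposed to the user for tweaking.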
<!--kg-card-end: markdown--><!--kg-card-begin: embed--><figure class="kg-card kg-embed-card kg-card-hascaption"><iframe width="480" height="270" src="https://www.youtube.com/embed/8cz6iMOFEls?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe><figcaption>Saving/loading camera motion in Foresight</figcaption></figure><!--kg-card-end: embed--><!--kg-card-begin: embed--><figure class="kg-card kg-embed-card kg-card-hascaption"><iframe width="480" height="270" src="https://www.youtube.com/embed/kwfS29pnRvU?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe><figcaption>Depth of Field feature in Foresight</figcaption></figure><!--kg-card-end: embed--><!--kg-card-begin: markdown--><p>Special thanks to my teammates <a href="http://www.angelajwchen.com/">Angela</a>, <a href="https://www.arnavbanerji.com/">Arnav</a>, <a href="https://www.zhanxinran.com/">Shera</a> and <a href="https://www.varunmehra.me/">Varun</a> for their exceptional work on this project and their valuable feedback on the camera system. I would also like to thank my faculty advisors, Mo and Chris.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Scotty3D]]></title><description><![CDATA[Scotty3D is a C++ 3D graphics toolkit including a mesh editor, a path tracer and an interactive animator.]]></description><link>https://www.keliu.info/scotty3d/</link><guid isPermaLink="false">5f61a06086135146354d4014</guid><category><![CDATA[Portfolio]]></category><dc:creator><![CDATA[Yang Liu]]></dc:creator><pubDate>Tue, 05 May 2020 14:39:00 GMT</pubDate><media:content url="https://www.keliu.info/content/images/2020/09/miku-final.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: image--><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.keliu.info/content/images/2020/09/miku-final-2.png" class="kg-image" alt="Scotty3D"><figcaption>An action figure of Hatsune Miku, rendered by my implementation of Scotty3D.</figcaption></figure><!--kg-card-end: image--><!--kg-card-begin: markdown--><img src="https://www.keliu.info/content/images/2020/09/miku-final.png" alt="Scotty3D"><p>Scotty3D is a C++ 3D graphics toolkit including a mesh editor, a path tracer and an interactive animator. This is part of the coursework of the <a href="http://15462.courses.cs.cmu.edu/spring2020/">CMU 15-462/662 Computer Graphics</a> course and the base code is available on <a href="https://github.com/cmu462/Scotty3D">GitHub</a>.</p>
<p>The rendered image above used art assets by Rummy<sup class="footnote-ref"><a href="#fn1" id="fnref1">[1]</a></sup> and Breanne Millette<sup class="footnote-ref"><a href="#fn2" id="fnref2">[2]</a></sup>.</p>
<hr class="footnotes-sep">
<section class="footnotes">
<ol class="footnotes-list">
<li id="fn1" class="footnote-item"><p>Model: Miku Hatsune V3 (Rummy) by Rummy, Link: <a href="https://mikumikudance.fandom.com/wiki/Miku_Hatsune_V3_(Rummy)">https://mikumikudance.fandom.com/wiki/Miku_Hatsune_V3_(Rummy)</a>. <a href="#fnref1" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn2" class="footnote-item"><p>Skybox image: Carousel by Breanne Millette on Artstation, Link: <a href="https://www.artstation.com/artwork/xwWGX">https://www.artstation.com/artwork/xwWGX</a>. <a href="#fnref2" class="footnote-backref">↩︎</a></p>
</li>
</ol>
</section>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h3 id="pathtracer">PathTracer</h3>
<ul>
<li>Coded key features for a Monte Carlo path tracer, including the BVH builder/traverser, the path tracing algorithm and an importance-based environment map sampler</li>
<li>Implemented extra techniques such as multi-jittered sampling</li>
<li>Extended the Collada parser and the material class to support rendering textured surfaces</li>
</ul>
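As a rough illustration of the multi-jittered sampling mentioned above (the coursework itself is C++; this Python sketch and its names are mine): the idea is to generate n·m samples that are stratified on the coarse n×m grid while also occupying each row and column of the fine nm×nm grid exactly once.

```python
import random

def multi_jittered(n, m, rng=None):
    """Illustrative sketch of multi-jittered sampling: n*m points in [0,1)^2,
    one per cell of the coarse n x m grid, and one per row/column of the
    fine (n*m) x (n*m) grid (the n-rooks property)."""
    rng = rng or random.Random()
    # Permute fine sub-cells per coarse column/row to decorrelate samples
    # without breaking either stratification.
    sub_col = [rng.sample(range(n), n) for _ in range(m)]
    sub_row = [rng.sample(range(m), m) for _ in range(n)]
    samples = []
    for i in range(n):          # coarse row
        for j in range(m):      # coarse column
            x = (j + (sub_col[j][i] + rng.random()) / n) / m
            y = (i + (sub_row[i][j] + rng.random()) / m) / n
            samples.append((x, y))
    return samples
```

Compared to plain jittered sampling, the extra fine-grid constraint keeps 1D projections of the sample set well distributed, which helps for effects like depth of field and soft shadows.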
<h3 id="meshedit">MeshEdit</h3>
<ul>
<li>Finished key functions for a halfedge-based mesh editor</li>
<li>Coded local operations such as edge collapse and global operations such as isotropic remeshing</li>
</ul>
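For context on the halfedge structure the mesh editor is built on, here is a deliberately tiny sketch (illustrative only, not Scotty3D's actual classes): each directed halfedge stores its origin vertex, its opposite halfedge and the next halfedge around its face, which is already enough to walk a face loop.

```python
class Halfedge:
    """Minimal halfedge record (hypothetical names): a directed edge that
    knows its opposite (twin), the next halfedge around its face, and the
    index of its origin vertex."""
    def __init__(self, vertex):
        self.vertex = vertex
        self.twin = None
        self.next = None

def face_vertices(h):
    """Walk `next` pointers once around a face, collecting origin vertices."""
    out, cur = [], h
    while True:
        out.append(cur.vertex)
        cur = cur.next
        if cur is h:
            return out
```

Local operations like edge collapse are then pointer surgery on these records, and one-rings are walked the same way via `twin.next`.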
<h3 id="animation">Animation</h3>
<!--kg-card-end: markdown--><!--kg-card-begin: embed--><figure class="kg-card kg-embed-card"><iframe width="480" height="270" src="https://www.youtube.com/embed/wrW-CI3l29s?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><!--kg-card-end: embed--><!--kg-card-begin: markdown--><ul>
<li>Finished key features of an interactive animator with skeleton kinematics</li>
<li>Implemented &quot;an integrator for the wave equation across the mesh&quot; <sup class="footnote-ref"><a href="#fn1" id="fnref1">[1]</a></sup></li>
</ul>
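The wave-equation integrator can be sketched as an explicit step of u_tt = c² Δu on the mesh vertices, using the umbrella Laplacian (mean of neighbor values minus the vertex's own value). This is a generic sketch under my own naming, not Scotty3D's actual code:

```python
def step_wave(u, v, neighbors, c=1.0, dt=0.01, damping=0.99):
    """One explicit time step of u_tt = c^2 * laplacian(u) on mesh vertices.
    `u` is a per-vertex displacement, `v` its velocity, and `neighbors[i]`
    the indices of vertices adjacent to vertex i. Illustrative only."""
    lap = [sum(u[j] for j in neighbors[i]) / len(neighbors[i]) - u[i]
           for i in range(len(u))]
    v = [damping * (v[i] + dt * c * c * lap[i]) for i in range(len(u))]
    u = [u[i] + dt * v[i] for i in range(len(u))]
    return u, v
```

The damping factor keeps the explicit scheme from blowing up over long runs; the time step still has to stay small relative to edge lengths and the wave speed.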
<hr class="footnotes-sep">
<section class="footnotes">
<ol class="footnotes-list">
<li id="fn1" class="footnote-item"><p>Description copied from <a href="https://github.com/cmu462/Scotty3D/wiki/Physical-Simulation">the wiki page</a> of Scotty3D's GitHub repo. <a href="#fnref1" class="footnote-backref">↩︎</a></p>
</li>
</ol>
</section>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Deep]]></title><description><![CDATA[Deep is an immersive CAVE experience in which two guests become the crew of a submarine and use their “vacuum” tools to collect lost treasures in a dangerous underwater area.]]></description><link>https://www.keliu.info/deep/</link><guid isPermaLink="false">5df2adb686135146354d3f8b</guid><category><![CDATA[Portfolio]]></category><category><![CDATA[Building Virtual Worlds]]></category><dc:creator><![CDATA[Yang Liu]]></dc:creator><pubDate>Thu, 12 Dec 2019 23:18:13 GMT</pubDate><media:content url="https://www.keliu.info/content/images/2019/12/Final_Video_Moment.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://www.keliu.info/content/images/2019/12/Final_Video_Moment.jpg" alt="Deep"><p>Deep is an immersive CAVE experience in which two guests become the crew of a submarine and use their “vacuum” tools to collect lost treasures in a dangerous underwater area. The guests traverse an undersea tunnel, dodge dangerous mines, collect valuables such as gold coins and pearls and reach the wreckage of a sunken pirate ship.</p><!--kg-card-begin: embed--><figure class="kg-card kg-embed-card"><iframe width="480" height="270" src="https://www.youtube.com/embed/MmfYqIE2v2c?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><!--kg-card-end: embed--><!--kg-card-begin: markdown--><h3 id="aboutthisproject">About this project</h3>
<ul>
<li>Three-week BVW project with a team of 5.</li>
<li>I was mainly the graphics programmer and producer on our team.</li>
<li>Tools used: Unity, Substance Designer, Photoshop</li>
</ul>
<h3 id="keyactivities">Key Activities</h3>
<ul>
<li>Implemented a screen-space underwater shader featuring depth fog and simulated surface scattering of sunlight.</li>
<li>Created the cone tornado special effect shader for the vacuum tools.</li>
<li>Built a cone mesh generator to avoid authoring multiple cone meshes in external tools.</li>
<li>Tuned parameters for effects such as bloom and SSAO in the post processing stack to better convey a mysterious and adventurous feeling.</li>
</ul>
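The cone mesh generator can be illustrated with a small sketch (the real implementation is a Unity component; this Python version with hypothetical names just shows the vertex/triangle layout): an apex, a base center and a ring of base vertices, with two triangles per segment, one for the side and one for the base cap.

```python
import math

def make_cone(radius, height, segments):
    """Procedural cone mesh (illustrative): apex at +height on the y axis,
    circular base of `radius` in the y=0 plane. Returns (vertices, triangles)
    where triangles are index triples into the vertex list."""
    verts = [(0.0, height, 0.0), (0.0, 0.0, 0.0)]  # apex, base center
    for k in range(segments):
        a = 2.0 * math.pi * k / segments
        verts.append((radius * math.cos(a), 0.0, radius * math.sin(a)))
    tris = []
    for k in range(segments):
        i0 = 2 + k
        i1 = 2 + (k + 1) % segments
        tris.append((0, i1, i0))  # side triangle
        tris.append((1, i0, i1))  # base cap triangle
    return verts, tris
```

Generating the mesh at load time means the segment count and proportions can be tweaked per effect without round-tripping through a DCC tool.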
<h3 id="screenshots">Screenshots</h3>
<p><img src="https://www.keliu.info/content/images/2019/12/Screenshot--14-fds.png" alt="Deep"><br>
<img src="https://www.keliu.info/content/images/2019/12/Screenshot--21-.png" alt="Deep"><br>
<img src="https://www.keliu.info/content/images/2019/12/Screenshot--30-fdsf.png" alt="Deep"></p>
<!--kg-card-end: markdown--><p></p>]]></content:encoded></item><item><title><![CDATA[Fluid Rendering in LabX]]></title><description><![CDATA[LabX aims to simulate chemistry and physics experiments in Mixed Reality for educational purposes.]]></description><link>https://www.keliu.info/fluid-rendering-in-labx/</link><guid isPermaLink="false">5de7c33537eea23f208b9e8e</guid><category><![CDATA[Portfolio]]></category><category><![CDATA[Personal Projects]]></category><dc:creator><![CDATA[Yang Liu]]></dc:creator><pubDate>Mon, 05 Aug 2019 19:25:00 GMT</pubDate><media:content url="https://www.keliu.info/content/images/2019/12/cloth-fluid-interaction.png" medium="image"/><content:encoded><![CDATA[<img src="https://www.keliu.info/content/images/2019/12/cloth-fluid-interaction.png" alt="Fluid Rendering in LabX"><p>LabX aims to simulate chemistry and physics experiments in Mixed Reality for educational purposes. I implemented and optimized the whole real-time, screen-space fluid rendering pipeline of LabX and achieved an average frame time of 9.02 ms on GeForce GTX 1060 6GB when simulating 20K particles at 1080p.</p><!--kg-card-begin: embed--><figure class="kg-card kg-embed-card kg-card-hascaption"><iframe width="480" height="270" src="https://www.youtube.com/embed/SKcvqPsQyP0?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe><figcaption>LabX Demo Video - Fluid Rendering Part</figcaption></figure><!--kg-card-end: embed--><!--kg-card-begin: markdown--><h3 id="keyactivities">Key Activities</h3>
<ul>
<li>Rendered smooth particle-based fluid in real time with the screen-space filtering approach (20K particles, 1080p, 9.02 ms average frame time on a GeForce GTX 1060 6GB)</li>
<li>Calculated and applied particle anisotropy efficiently using compute shaders to improve surface smoothness</li>
<li>Implemented and compared two common filtering approaches, screen-space curvature flow and bilateral filtering</li>
<li>Identified performance bottlenecks with tools such as Nsight, then reduced bilateral filter running time by up to 27.9% using techniques such as utilizing shared memory and employing loop unrolling</li>
</ul>
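For the bilateral filtering mentioned above, here is a 1D sketch of the idea (the real filter is a 2D compute shader; names here are mine): each smoothed depth is a weighted average of its neighbors, where the weights fall off with both spatial distance and depth difference, so silhouette edges are not blurred across.

```python
import math

def bilateral_1d(depth, radius=4, sigma_s=2.0, sigma_r=0.1):
    """Edge-preserving smoothing of a 1D depth row (illustrative sketch).
    sigma_s controls the spatial falloff, sigma_r the depth-difference
    falloff that stops blurring across silhouettes."""
    out = []
    for i, d in enumerate(depth):
        wsum = vsum = 0.0
        for k in range(-radius, radius + 1):
            j = min(max(i + k, 0), len(depth) - 1)  # clamp at the borders
            w = (math.exp(-(k * k) / (2.0 * sigma_s * sigma_s))
                 * math.exp(-((depth[j] - d) ** 2) / (2.0 * sigma_r * sigma_r)))
            wsum += w
            vsum += w * depth[j]
        out.append(vsum / wsum)
    return out
```

The GPU version applies this separably over rows and columns of the depth buffer, with the shared-memory and loop-unrolling optimizations mentioned above on top.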
<h3 id="pipelineperformancemetrics">Pipeline Performance Metrics</h3>
<p>The fluid rendering pipeline was tested in two different test scenes. The first &quot;Single Dam Break&quot; scene only tests the rendering of simple fluid, while the second &quot;Fluid–rigid body Interaction&quot; scene stresses the pipeline during the interaction between 3D fluid and rigid bodies.</p>
<!--kg-card-end: markdown--><!--kg-card-begin: gallery--><figure class="kg-card kg-gallery-card kg-width-wide kg-card-hascaption"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://www.keliu.info/content/images/2019/12/single-dam-break.png" width="1920" height="1080" alt="Fluid Rendering in LabX"></div><div class="kg-gallery-image"><img src="https://www.keliu.info/content/images/2019/12/lab-test.png" width="1920" height="1080" alt="Fluid Rendering in LabX"></div></div></div><figcaption>Left: Single Dam Break scene; Right: Fluid-rigid body Interaction scene</figcaption></figure><!--kg-card-end: gallery--><!--kg-card-begin: markdown--><p><em>Performance of the single dam break scene</em><br>
<em>1080p, 3.1K particles, using compute shader bilateral filtering with a diameter of 9 pixels</em></p>
<!--kg-card-end: markdown--><!--kg-card-begin: html--><table>
  <col>
  <colgroup span="2"></colgroup>
  <colgroup span="2"></colgroup>
  <tr>
    <td rowspan="2"></td>
    <th colspan="2" scope="colgroup">Intel Core i7-6700HQ<br>NVIDIA GeForce GTX 960M 2GB</th>
    <th colspan="2" scope="colgroup">Intel Core i7-6700HQ<br>NVIDIA GeForce GTX 1060 6GB</th>
  </tr>
  <tr>
    <th scope="col" style="text-align:right;">CPU Time (ms)</th>
    <th scope="col" style="text-align:right;">GPU Time (ms)</th>
    <th scope="col" style="text-align:right;">CPU Time (ms)</th>
    <th scope="col" style="text-align:right;">GPU Time (ms)</th>
  </tr>
    <tr>
    <td>Cascaded Shadowmaps (4 cascades)
    </td><td style="text-align:right;">0.04</td>
    <td style="text-align:right;">2.75</td>
    <td style="text-align:right;">&lt;0.02</td>
    <td style="text-align:right;">0.94</td>
  </tr>
  <tr>
    <td>Calculate Anisotropy
    </td><td style="text-align:right;">&lt;0.01</td>
    <td style="text-align:right;">0.10</td>
    <td style="text-align:right;">&lt;0.01</td>
    <td style="text-align:right;">0.03</td>
  </tr>
  <tr>
    <td>Particle Depth Splatting
    </td><td style="text-align:right;">0.02</td>
    <td style="text-align:right;">3.97</td>
    <td style="text-align:right;">&lt;0.01</td>
    <td style="text-align:right;">1.45</td>
  </tr>
  <tr>
    <td>Particle Thickness Splatting
    </td><td style="text-align:right;">0.02</td>
    <td style="text-align:right;">0.10</td>
    <td style="text-align:right;">&lt;0.01</td>
    <td style="text-align:right;">0.02</td>
  </tr>
  <tr>
    <td>Depth Smoothing
    </td><td style="text-align:right;">0.02</td>
    <td style="text-align:right;">0.03</td>
    <td style="text-align:right;">0.03</td>
    <td style="text-align:right;">0.01</td>
  </tr>
  <tr>
    <td>Thickness Smoothing
    </td><td style="text-align:right;">0.09</td>
    <td style="text-align:right;">7.46</td>
    <td style="text-align:right;">0.04</td>
    <td style="text-align:right;">2.42</td>
  </tr>
  <tr>
    <td>Image Synthesis
    </td><td style="text-align:right;">0.03</td>
    <td style="text-align:right;">0.20</td>
    <td style="text-align:right;">0.01</td>
    <td style="text-align:right;">0.08</td>
  </tr>
  <tr>
    <td>Total
    </td><td style="text-align:right;">~0.23</td>
    <td style="text-align:right;">14.61</td>
    <td style="text-align:right;">~0.13</td>
    <td style="text-align:right;">4.95</td>
  </tr>
</table><!--kg-card-end: html--><!--kg-card-begin: markdown--><p><em>Performance of the fluid-rigid body interaction scene</em><br>
<em>1080p, 20.2K fluid particles, using adaptive radius bilateral filtering with a maximum diameter of 41 pixels</em></p>
<!--kg-card-end: markdown--><!--kg-card-begin: html--><table>
  <col>
  <colgroup span="2"></colgroup>
  <colgroup span="2"></colgroup>
  <tr>
    <td rowspan="2"></td>
    <th colspan="2" scope="colgroup">Intel Core i7-6700HQ<br>NVIDIA GeForce GTX 960M 2GB</th>
    <th colspan="2" scope="colgroup">Intel Core i7-6700HQ<br>NVIDIA GeForce GTX 1060 6GB</th>
  </tr>
  <tr>
    <th scope="col" style="text-align:right;">CPU Time (ms)</th>
    <th scope="col" style="text-align:right;">GPU Time (ms)</th>
    <th scope="col" style="text-align:right;">CPU Time (ms)</th>
    <th scope="col" style="text-align:right;">GPU Time (ms)</th>
  </tr><tr>
  <td>Update Rigidbody Poses
  </td><td style="text-align:right;">0.33</td>
  <td style="text-align:right;">0.00</td>
  <td style="text-align:right;">0.03</td>
  <td style="text-align:right;">0.00</td>
</tr>
<tr>
  <td>Calculate Anisotropy
  </td><td style="text-align:right;">&lt;0.01</td>
  <td style="text-align:right;">0.38</td>
  <td style="text-align:right;">&lt;0.01</td>
  <td style="text-align:right;">0.22</td>
</tr>
<tr>
  <td>Particle Depth Splatting
  </td><td style="text-align:right;">0.01</td>
  <td style="text-align:right;">11.25</td>
  <td style="text-align:right;">0.01</td>
  <td style="text-align:right;">4.32</td>
</tr>
<tr>
  <td>Particle Thickness Splatting
  </td><td style="text-align:right;">0.01</td>
  <td style="text-align:right;">0.72</td>
  <td style="text-align:right;">&lt;0.01</td>
  <td style="text-align:right;">0.19</td>
</tr>
<tr>
  <td>Thickness Smoothing
  </td><td style="text-align:right;">0.02</td>
  <td style="text-align:right;">0.02</td>
  <td style="text-align:right;">&lt;0.01</td>
  <td style="text-align:right;">0.01</td>
</tr>
<tr>
  <td>Depth Smoothing
  </td><td style="text-align:right;">&lt;0.01</td>
  <td style="text-align:right;">14.05</td>
  <td style="text-align:right;">&lt;0.01</td>
  <td style="text-align:right;">4.17</td>
</tr>
<tr>
  <td>Image Synthesis
  </td><td style="text-align:right;">0.02</td>
  <td style="text-align:right;">0.30</td>
  <td style="text-align:right;">0.01</td>
  <td style="text-align:right;">0.11</td>
</tr>
<tr>
  <td>Total
  </td><td style="text-align:right;">~0.41</td>
  <td style="text-align:right;">26.72</td>
  <td style="text-align:right;">~0.09</td>
  <td style="text-align:right;">9.02</td>
</tr>
</table><!--kg-card-end: html-->]]></content:encoded></item><item><title><![CDATA[Buoyancy Lab]]></title><description><![CDATA[Buoyancy Lab provides a VR environment that can simulate and visualize hydrostatic forces. It is designed to make learning physics concepts more engaging to middle school students.]]></description><link>https://www.keliu.info/buoyancy-lab/</link><guid isPermaLink="false">5c61d83d37eea23f208b9df4</guid><category><![CDATA[Portfolio]]></category><dc:creator><![CDATA[Yang Liu]]></dc:creator><pubDate>Wed, 15 Aug 2018 08:41:00 GMT</pubDate><media:content url="https://www.keliu.info/content/images/2019/02/vlcsnap-2018-12-15-13h04m38s743.png" medium="image"/><content:encoded><![CDATA[<img src="https://www.keliu.info/content/images/2019/02/vlcsnap-2018-12-15-13h04m38s743.png" alt="Buoyancy Lab"><p>Buoyancy Lab provides a VR environment that can simulate and visualize hydrostatic forces. It is designed to make learning physics concepts more engaging to middle school students. The lab has 3 modes: the inspect mode, the build mode and the play mode.</p><figure class="kg-card kg-embed-card"><iframe width="480" height="270" src="https://www.youtube.com/embed/VlaICBmTrrk?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><table>
<thead>
<tr>
<th>Item</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Role</td>
<td>Lead Programmer</td>
</tr>
<tr>
<td>Timeline</td>
<td>March 2018 - June 2018</td>
</tr>
<tr>
<td>Tools Used</td>
<td>Unity, HTC Vive, Photoshop</td>
</tr>
</tbody>
</table>
<h3 id="keyactivities">Key Activities</h3>
<ul>
<li>Implemented physics-based hydrostatic force simulation with the formula $ \mathrm{d}\vec{F}=\rho gz\mathrm{d}S\vec{n} $.  <sup class="footnote-ref"><a href="#fn1" id="fnref1">[1]</a></sup></li>
<li>Exploited parallelism in the algorithm and achieved up to 4x the frame rate with a job system.</li>
</ul>
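The formula above is evaluated per submerged triangle and summed. A minimal CPU-side sketch of that accumulation (illustrative Python, not the actual Unity/job-system implementation; the triangle layout and sign convention are assumptions):

```python
RHO_WATER = 1000.0  # kg/m^3
G = 9.81            # m/s^2

def hydrostatic_force(triangles, water_level=0.0):
    """Sum dF = rho * g * z * dS * n over submerged triangles.

    Each triangle is (centroid_y, area, normal) with the normal an
    outward-facing (x, y, z) unit tuple; z is the centroid's depth
    below the water surface. Triangles above the water line are skipped.
    """
    fx = fy = fz = 0.0
    for centroid_y, area, (nx, ny, nz) in triangles:
        depth = water_level - centroid_y
        if depth <= 0.0:
            continue  # not submerged, no pressure force
        magnitude = RHO_WATER * G * depth * area
        # Pressure pushes against the hull, opposite the outward normal.
        fx -= magnitude * nx
        fy -= magnitude * ny
        fz -= magnitude * nz
    return fx, fy, fz
```

Because every triangle's contribution is independent, this loop parallelizes trivially, which is what the job-system optimization above exploits.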
<hr class="footnotes-sep">
<section class="footnotes">
<ol class="footnotes-list">
<li id="fn1" class="footnote-item"><p>Based on Jacques Kerner's <a href="https://www.gamasutra.com/view/news/237528/Water_interaction_model_for_boats_in_video_games.php">Water interaction model for boats in video games</a> <a href="#fnref1" class="footnote-backref">↩︎</a></p>
</li>
</ol>
</section>
<figure class="kg-card kg-image-card"><img src="https://www.keliu.info/content/images/2019/02/vlcsnap-2018-06-28-00h23m56s989.png" class="kg-image" alt="Buoyancy Lab"><figcaption>In the inspect mode, the user can check different physics properties of a floating object.</figcaption></figure><figure class="kg-card kg-image-card"><img src="https://www.keliu.info/content/images/2019/02/BuoyancyProfiler_BOTH.png" class="kg-image" alt="Buoyancy Lab"><figcaption>The profiled performance without multi-threading (top) vs. with multi-threading (bottom). The elapsed time in the red boxes shows that this optimization can reduce simulation time by about 75%.&nbsp;</figcaption></figure>]]></content:encoded></item><item><title><![CDATA[Project Bastion]]></title><description><![CDATA[Project Bastion is a “shoot ’em up” game happening in 3D space: players build their own self-defense bastions and destroy others’ bastions and compete for resources.]]></description><link>https://www.keliu.info/project-bastion/</link><guid isPermaLink="false">5c61ce0a37eea23f208b9d4d</guid><category><![CDATA[Portfolio]]></category><dc:creator><![CDATA[Yang Liu]]></dc:creator><pubDate>Fri, 29 Jun 2018 19:33:00 GMT</pubDate><media:content url="https://www.keliu.info/content/images/2019/02/IMG_0280-1.PNG" medium="image"/><content:encoded><![CDATA[<img src="https://www.keliu.info/content/images/2019/02/IMG_0280-1.PNG" alt="Project Bastion"><p>Project Bastion is a “shoot ’em up” game happening in 3D space: players build their own self-defense bastions and destroy others’ bastions and compete for resources. You need to maneuver through the dense barrage created by different kinds of enemy projectiles and try to locate and destroy the energy core somewhere inside the bastion to disable this devastating defense machine. Apart from challenging yourself in various pre-built levels, you can also unleash your creativity in the bastion builder mode. 
The bastion builder mode allows you to build your own self-defense bastions with 8 different kinds of bastion modules. You can fill the space with bullets using the Stormer module, or create a temporary protective shield using the Defender module. Build your invincible bastion and challenge your friends to destroy it!</p><figure class="kg-card kg-embed-card"><iframe width="480" height="270" src="https://www.youtube.com/embed/C2aRSeUbbZQ?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><table>
<thead>
<tr>
<th>Item</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Role</td>
<td>Graphics Programmer, Lead Designer</td>
</tr>
<tr>
<td>Timeline</td>
<td>September 2017 - May 2018</td>
</tr>
<tr>
<td>Tools Used</td>
<td>Unity, ARKit, Xcode, Photoshop</td>
</tr>
</tbody>
</table>
<h3 id="keyactivities">Key Activities</h3>
<ul>
<li>Designed the core gameplay mechanics and implemented behaviors for more than 5 kinds of enemy turrets</li>
<li>Designed and coded the bastion builder system with serialization/deserialization and a flexible UI</li>
<li>Utilized custom shaders to achieve special effects such as holograms</li>
<li>Reduced frame time by about 50% with techniques such as batching and instancing</li>
</ul>
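The builder's save data boils down to round-tripping a list of module placements. A simplified stand-in for that serialization (JSON here instead of Unity's serializer; the `type`/`pos`/`rot` field names are hypothetical):

```python
import json

def serialize_bastion(modules):
    """Flatten module placements into a JSON string.

    Each module is a dict with a type name, grid position and rotation;
    being able to round-trip this layout is what lets a player share a
    custom bastion for friends to attack.
    """
    return json.dumps({"version": 1, "modules": modules}, sort_keys=True)

def deserialize_bastion(text):
    data = json.loads(text)
    if data.get("version") != 1:
        raise ValueError("unsupported bastion format")
    return data["modules"]
```

Versioning the format up front makes it possible to evolve the module set without breaking older saved bastions.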
<figure class="kg-card kg-gallery-card kg-width-wide"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://www.keliu.info/content/images/2019/02/IMG_0280.PNG" width="2048" height="1536" alt="Project Bastion"></div><div class="kg-gallery-image"><img src="https://www.keliu.info/content/images/2019/02/IMG_0277.PNG" width="2048" height="1536" alt="Project Bastion"></div></div></div><figcaption>Some screen shots from an iPad Pro 9.7-inch.</figcaption></figure><h3 id="intentions">Intentions</h3>
<p>Many of the first wave of AR games, like the pet simulator AR Dragon and the puzzle game Conduct AR!, treat players as passive viewers of virtual objects, with limited agency to interact with the virtual world. However, one AR game, Pigeon Panic!, is different: it uses the player's real-world movements directly as input, driving the amusing reactions of scared virtual pigeons. This idea of a game world reacting to the player's physical movements, together with my interest in traditional hardcore STGs like Strikers 1945, gave birth to the core mechanic of Project Bastion: players shoot at a bastion equipped with turrets while dodging incoming bullets.</p>
<p>After a short prototype stage, my work shifted to designing and implementing a variety of turrets. My first priority was to recreate the most important element of pleasure in traditional bullet hell games: the element of ilinx. The thrill of vertigo comes from the illusion that we become more vulnerable under the overwhelming bullet barrage. Thus the turret called &quot;Stormer&quot; was created first; it fires rapidly around you instead of towards you. Other kinds of turrets were added to develop different rhythms: the red bullets fired by Crimson require instant reactions from the player, while the spherical energy shields generated by Defender encourage patience and push players to switch firing priorities. A flexible bastion builder system was designed and implemented later, initially to shorten our team's iteration time, as it often takes at least half an hour to install the game on an iOS device. We later decided to expose it to players as well, so that a player can design a customized bastion and invite friends to destroy it.</p>
]]></content:encoded></item><item><title><![CDATA[AR Graffiti]]></title><description><![CDATA[AR Graffiti is an eco-friendly graffiti app that empowers artists to create street art anywhere, anytime.]]></description><link>https://www.keliu.info/ar-graffiti/</link><guid isPermaLink="false">5c61d5da37eea23f208b9db7</guid><category><![CDATA[Portfolio]]></category><dc:creator><![CDATA[Yang Liu]]></dc:creator><pubDate>Mon, 25 Jun 2018 20:06:00 GMT</pubDate><media:content url="https://www.keliu.info/content/images/2019/02/IMG_0431.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://www.keliu.info/content/images/2019/02/IMG_0431.jpg" alt="AR Graffiti"><p>AR Graffiti is an eco-friendly graffiti app that empowers artists to create street art anywhere, anytime. Choose a wall to paint on, pick up the spray can of your color, shake your device and just start painting! Made with Unity + Apple ARKit.</p><figure class="kg-card kg-embed-card"><iframe width="480" height="270" src="https://www.youtube.com/embed/voKrZzp0Fbc?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><table>
<thead>
<tr>
<th>Item</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Role</td>
<td>Programmer, Designer</td>
</tr>
<tr>
<td>Timeline</td>
<td>March 2018 - June 2018</td>
</tr>
<tr>
<td>Tools Used</td>
<td>Unity, ARKit, Xcode, Photoshop</td>
</tr>
</tbody>
</table>
<h3 id="keyactivities">Key Activities</h3>
<ul>
<li>Implemented a customized shader with easily configurable properties to simulate real-world spray paint: more area is covered when the user sprays from a greater distance</li>
<li>Built a scalable art board system capable of redoing &amp; undoing with Render To Texture and achieved a significant frame rate gain</li>
</ul>
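The distance-dependent coverage described above can be modeled as a spray cone: the splat radius grows linearly with distance, so the same amount of paint spreads over a larger disk and lands thinner. A small illustrative sketch (hypothetical parameters, not the actual shader):

```python
import math

def spray_footprint(distance, cone_half_angle_deg=10.0, paint_per_shot=1.0):
    """Model a spray can as a cone.

    Returns (radius, opacity): the splat radius grows linearly with
    distance to the wall, while spreading a fixed amount of paint over
    the larger disk makes opacity fall off with 1/distance^2.
    """
    radius = distance * math.tan(math.radians(cone_half_angle_deg))
    area = math.pi * radius * radius
    opacity = paint_per_shot / area
    return radius, opacity
```

Doubling the spraying distance doubles the splat radius and quarters the paint density, matching how a real can behaves.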
<figure class="kg-card kg-gallery-card kg-width-wide"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://www.keliu.info/content/images/2019/02/IMG_0431-1.PNG" width="750" height="1334" alt="AR Graffiti"></div><div class="kg-gallery-image"><img src="https://www.keliu.info/content/images/2019/02/IMG_0433-1.PNG" width="750" height="1334" alt="AR Graffiti"></div><div class="kg-gallery-image"><img src="https://www.keliu.info/content/images/2019/02/IMG_0435-1.PNG" width="750" height="1334" alt="AR Graffiti"></div></div></div><figcaption>Choose a color, paint by aiming and share with your friends!</figcaption></figure>]]></content:encoded></item><item><title><![CDATA[Hanabi Particle System]]></title><description><![CDATA[Hanabi is a mesh particle system demo that simulates petals and grass with modern OpenGL. ]]></description><link>https://www.keliu.info/hanabi-particle-system/</link><guid isPermaLink="false">5c61cae137eea23f208b9d26</guid><category><![CDATA[Portfolio]]></category><category><![CDATA[Personal Projects]]></category><dc:creator><![CDATA[Yang Liu]]></dc:creator><pubDate>Sat, 03 Feb 2018 19:19:00 GMT</pubDate><media:content url="https://www.keliu.info/content/images/2019/02/screenshot-1.PNG" medium="image"/><content:encoded><![CDATA[<img src="https://www.keliu.info/content/images/2019/02/screenshot-1.PNG" alt="Hanabi Particle System"><p>Hanabi is a mesh particle system demo that simulates petals and grass with modern OpenGL. Users can generate petals with a click of mouse, feeling the beauty of nature. 
The grass and the light also follow your mouse naturally.</p><!--kg-card-begin: embed--><figure class="kg-card kg-embed-card"><iframe width="480" height="270" src="https://www.youtube.com/embed/B5nxV81KJz8?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><!--kg-card-end: embed--><!--kg-card-begin: markdown--><p>Source code has been uploaded to <a href="https://github.com/GhostatSpirit/Hanabi2">GitHub</a>.</p>
<table>
<thead>
<tr>
<th>Item</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Role</td>
<td>Programmer, Designer</td>
</tr>
<tr>
<td>Timeline</td>
<td>March 2018 - June 2018</td>
</tr>
<tr>
<td>Tools Used</td>
<td>OpenGL, Visual Studio</td>
</tr>
</tbody>
</table>
<h3 id="keyactivities">Key Activities</h3>
<ul>
<li>Utilized the geometry shader to generate individual grass blades. Calculated and passed in the world position of the cursor to make the grass blades &quot;breathe&quot; with it.</li>
<li>Implemented basic light shading with Phong Illumination Model. Improved performance with the half-way vector optimization.</li>
<li>Implemented basic physics simulation for the petals.</li>
<li>Optimized performance with methods such as GPU Instancing.</li>
</ul>
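The half-way vector optimization mentioned above is Blinn's variant of Phong shading: instead of computing a reflection vector per fragment, the specular term uses H = normalize(L + V). A minimal sketch of the math (illustrative Python rather than the project's GLSL):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def blinn_phong_specular(light_dir, view_dir, normal, shininess=32.0):
    """Specular term via the half-way vector: H = normalize(L + V),
    spec = max(N . H, 0) ^ shininess. This avoids computing a reflected
    ray per fragment, which is the optimization mentioned above."""
    half = normalize(tuple(l + v for l, v in zip(light_dir, view_dir)))
    return max(dot(normal, half), 0.0) ** shininess
```

With the light directly behind the viewer along the normal the highlight peaks at 1.0, and it falls to 0 once the half-way vector points away from the surface.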
<!--kg-card-end: markdown--><!--kg-card-begin: image--><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.keliu.info/content/images/2019/02/screenshot.PNG" class="kg-image" alt="Hanabi Particle System"><figcaption>Generate a vibrant environment with simple mouse interaction.</figcaption></figure><!--kg-card-end: image-->]]></content:encoded></item><item><title><![CDATA[Re: Link]]></title><description><![CDATA[A cooperative action game where two players try to survive in a dystopian cyber world.]]></description><link>https://www.keliu.info/re-link/</link><guid isPermaLink="false">5c61ddc137eea23f208b9e60</guid><category><![CDATA[Portfolio]]></category><dc:creator><![CDATA[Yang Liu]]></dc:creator><pubDate>Tue, 01 Aug 2017 14:30:00 GMT</pubDate><media:content url="https://www.keliu.info/content/images/2019/02/2018-03-26--1-.png" medium="image"/><content:encoded><![CDATA[<img src="https://www.keliu.info/content/images/2019/02/2018-03-26--1-.png" alt="Re: Link"><p>In the game, two players cooperate as Hacker and AI, trying to survive by turning different enemies into allies and taking advantage of the environment with their connect and disconnect abilities in the cyber world.</p><figure class="kg-card kg-embed-card"><iframe width="480" height="270" src="https://www.youtube.com/embed/DmSWNEcOWpA?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><table>
<thead>
<tr>
<th>Item</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Role</td>
<td>Lead Programmer &amp; Designer</td>
</tr>
<tr>
<td>Timeline</td>
<td>January 2017 - June 2017</td>
</tr>
<tr>
<td>Tools Used</td>
<td>Unity, Photoshop</td>
</tr>
</tbody>
</table>
<h3 id="keyactivities">Key Activities</h3>
<ul>
<li>Implemented 3 distinct enemy AIs based on finite state machines</li>
<li>Designed and implemented two complementary player skills: Connect and Disconnect</li>
<li>Designed room layouts for the level and coded environmental puzzles such as the laser tower</li>
</ul>
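The finite-state-machine structure behind the enemy AIs can be illustrated with a minimal sketch (hypothetical patrol/chase/attack states; the actual Unity enemies have richer behavior and transitions):

```python
class EnemyFSM:
    """Minimal finite state machine of the kind used for the enemy AIs.

    Each state is a method that inspects the world and returns the name
    of the next state; update() dispatches to the current state's method.
    """

    def __init__(self):
        self.state = "patrol"

    def update(self, sees_player, in_attack_range):
        handler = getattr(self, "state_" + self.state)
        self.state = handler(sees_player, in_attack_range)
        return self.state

    def state_patrol(self, sees_player, in_attack_range):
        return "chase" if sees_player else "patrol"

    def state_chase(self, sees_player, in_attack_range):
        if not sees_player:
            return "patrol"
        return "attack" if in_attack_range else "chase"

    def state_attack(self, sees_player, in_attack_range):
        return "attack" if in_attack_range else "chase"
```

Keeping each state's logic in its own method makes a new enemy type mostly a matter of adding or swapping states.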
<figure class="kg-card kg-image-card"><img src="https://www.keliu.info/content/images/2019/02/2017-06-20--56-.png" class="kg-image" alt="Re: Link"><figcaption>Fighting the BOSS - Cyber Octopus.</figcaption></figure><figure class="kg-card kg-image-card"><img src="https://www.keliu.info/content/images/2019/02/2017-06-20--36-.png" class="kg-image" alt="Re: Link"><figcaption>In the laser room, you need to stay in the shadows to stay safe.</figcaption></figure><figure class="kg-card kg-image-card"><img src="https://www.keliu.info/content/images/2019/02/Level-Design.png" class="kg-image" alt="Re: Link"><figcaption>My initial design draft for the level.</figcaption></figure>]]></content:encoded></item><item><title><![CDATA[Light Chaser]]></title><description><![CDATA[Light Chaser is a top-down shooting game where two alien creatures fire light waves to illuminate their surroundings and explore the scene.]]></description><link>https://www.keliu.info/light-chaser/</link><guid isPermaLink="false">5c61d08237eea23f208b9d79</guid><category><![CDATA[Portfolio]]></category><category><![CDATA[Game Jam]]></category><dc:creator><![CDATA[Yang Liu]]></dc:creator><pubDate>Sun, 05 Mar 2017 10:32:00 GMT</pubDate><media:content url="https://www.keliu.info/content/images/2019/02/gfwqRT.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://www.keliu.info/content/images/2019/02/gfwqRT.jpg" alt="Light Chaser"><p>Light Chaser is a top-down shooting game where two alien creatures fire light waves to illuminate their surroundings and explore the scene. During their exploration process, they also need to locate and neutralize their enemies from the dark. Light is a kind of wave, so it can naturally bounce between walls. The light source you fired can also be retrieved for a faster reload. 
Utilize your light waves to expand your range of view and restrict your opponent's vision.</p><figure class="kg-card kg-embed-card"><iframe width="480" height="270" src="https://www.youtube.com/embed/Ouq11LRebxk?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><table>
<thead>
<tr>
<th>Item</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Role</td>
<td>System Designer, Programmer</td>
</tr>
<tr>
<td>Timeline</td>
<td>January 2017</td>
</tr>
<tr>
<td>Tools Used</td>
<td>Unity, Photoshop, BFXR</td>
</tr>
</tbody>
</table>
<h3 id="keyactivities">Key Activities</h3>
<ul>
<li>Designed and implemented the rules around the &quot;light wave&quot;, such as light bouncing and light retrieving</li>
<li>Utilized and optimized sprite-based 2D lighting to deliver a dynamic sense of exploration and a thrill of the unknown with hundreds of lights blended on screen</li>
</ul>
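The light-bouncing rule follows the standard reflection formula, r = d − 2(d·n)n, for an incoming direction d hitting a wall with unit normal n. A minimal sketch (illustrative only, with 2-D vectors as tuples):

```python
def reflect(direction, wall_normal):
    """Bounce a light wave off a wall: r = d - 2 (d . n) n,
    where d is the incoming direction and n the wall's unit normal."""
    k = 2.0 * sum(a * b for a, b in zip(direction, wall_normal))
    return tuple(a - k * b for a, b in zip(direction, wall_normal))
```

A wave traveling down-right into a floor, for example, bounces up-right, which is what lets a well-aimed shot light up a corridor around a corner.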
<figure class="kg-card kg-image-card"><img src="https://www.keliu.info/content/images/2019/02/u0er1h.png" class="kg-image" alt="Light Chaser"><figcaption>An in-game screenshot. This is a local split-screen multiplayer game.</figcaption></figure><h3 id="downloadinstallinstructions">Download &amp; Install Instructions</h3>
<p>Currently, Light Chaser only supports the Windows platform. Please visit Light Chaser's itch.io page to download the latest version of the game: <a href="https://lykavin.itch.io/light-chaser">https://lykavin.itch.io/light-chaser</a>.<br>
After downloading and extracting the zip file, connect two controllers to your PC and double-click LightChaser.exe to launch the game. Common controllers such as the DualShock 4 and Xbox One controller are supported. Visit this link to view all supported controllers: <a href="http://www.gallantgames.com/pages/incontrol-supported-controllers">http://www.gallantgames.com/pages/incontrol-supported-controllers</a>.</p>
<h3 id="interactioninstructions">Interaction instructions</h3>
<p>You can always click on the &quot;help&quot; button in the main menu to learn about all the basic controls for this game:</p>
<ul>
<li>Use the left joystick to move around.</li>
<li>Use the right joystick to aim.</li>
<li>Use the right trigger button (R2 on DualShock controllers, RT on Xbox controllers) to fire a light wave.</li>
<li>While playing, press the &quot;Option&quot; button on DS4 / &quot;Back&quot; button on the Xbox 360 Controller / &quot;Menu&quot; button on the Xbox One Controller to return to the main menu.</li>
<li>While playing, you can also press the &quot;Esc&quot; key on your keyboard to close the game directly.</li>
</ul>
<h3 id="intentions">Intentions</h3>
<p>This game was initially created at Global Game Jam '17. The theme for that year's GGJ was &quot;waves&quot;. Intrigued by real-time lighting at the time, I proposed the idea of players firing &quot;light waves&quot; at each other in a dark environment during our initial brainstorm. After some discussion, our team settled on this idea and started prototyping. After thinking more deeply about the properties of light waves, I introduced light bouncing to make illuminating the environment easier. The ability to retrieve the light source was also implemented to establish an input pattern of &quot;fire a light wave, follow the light path and retrieve the light source&quot;. These all add up to a sense of exploration with a natural rhythm in this game.</p>
<p>Later, I realized that the excitement of exploration would diminish as the player brightens every corner of the map. To solve this problem, we allowed a player's light wave to also darken his or her opponent's vision. This mechanic also helped us build a sense of thrill, as each player's range of view is constantly shifting and the opponent's position stays uncertain.</p>
]]></content:encoded></item></channel></rss>