ShaderToy shader tutorial: Color Gradient to an Animated Mandelbrot in a GPU Shader (Part 1)
Shader Basics: Go from a Color Gradient to an Animated Mandelbrot in a GPU Shader in One Lesson. To play with shaders, go to https://www.shadertoy.com/.
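As a starting point for the lesson, here is a minimal sketch of the kind of color-gradient shader you can paste into ShaderToy's editor. It uses ShaderToy's standard `mainImage` entry point and the built-in `iResolution` uniform; the specific colors chosen here are illustrative, not taken from the tutorial itself.

```glsl
// Minimal ShaderToy fragment shader: a horizontal color gradient.
// iResolution is a uniform ShaderToy supplies automatically.
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Normalize pixel coordinates so uv runs from 0.0 to 1.0 across the screen.
    vec2 uv = fragCoord / iResolution.xy;

    // Blend from blue on the left edge to red on the right edge.
    vec3 color = mix(vec3(0.0, 0.0, 1.0),
                     vec3(1.0, 0.0, 0.0),
                     uv.x);

    fragColor = vec4(color, 1.0);
}
```

Every pixel runs this function in parallel on the GPU, which is what makes later steps like animating a Mandelbrot set (iterating `z = z*z + c` per pixel, with `iTime` driving the animation) practical in real time.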