CUDA (or OpenGL) video capture in Linux (29.5.2012)
Video capturing using CUDA... that sounds a bit odd, doesn't it? Well, my motivation was this: many of the graphics algorithms I develop rely on GPGPU, so the rendering result is first available in a CUDA buffer. Moreover, video capture that does not slow down the actual application is not a trivial task, and CUDA offers nicely explicit asynchronous transfer modes for batching transfers off the GPU as much in the background as possible.
This can be used in conjunction with a pure OpenGL engine as well, if you are willing to accept CUDA as a dependency: simply register your OpenGL render target with CUDA through the CUDA OpenGL interoperability API, and feed the mapped CUDA buffer as the input to this video capturer. The overhead of mapping the render target into CUDA should be minuscule compared to the whole task.
So the key idea is as follows:
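A minimal sketch of the idea, not the post's actual code: each frame, an asynchronous device-to-host copy of the rendered frame is queued into one of two page-locked host buffers, and while that copy is in flight the previously downloaded frame is written to disk. All names here (`VideoCapturer`, `capture`, `frame_bytes`) are illustrative.

```cuda
// Hypothetical double-buffered async frame downloader (illustrative only).
#include <cuda_runtime.h>
#include <cstdio>

struct VideoCapturer {
    cudaStream_t  stream;
    unsigned char* host[2];     // two pinned (page-locked) host buffers
    size_t        frame_bytes;
    FILE*         out;
    int           frame = 0;

    VideoCapturer(size_t bytes, const char* path) : frame_bytes(bytes) {
        cudaStreamCreate(&stream);
        for (int i = 0; i < 2; ++i)  // pinned memory enables truly async copies
            cudaHostAlloc(&host[i], frame_bytes, cudaHostAllocDefault);
        out = fopen(path, "wb");
    }

    ~VideoCapturer() {
        fclose(out);
        for (int i = 0; i < 2; ++i) cudaFreeHost(host[i]);
        cudaStreamDestroy(stream);
    }

    // Called once per rendered frame with the device-side framebuffer.
    void capture(const void* dev_frame) {
        int cur  = frame & 1;
        int prev = cur ^ 1;
        // Queue this frame's download; it overlaps with the fwrite below.
        cudaMemcpyAsync(host[cur], dev_frame, frame_bytes,
                        cudaMemcpyDeviceToHost, stream);
        // Meanwhile, write the previous frame (whose copy already
        // finished last round) to disk.
        if (frame > 0)
            fwrite(host[prev], 1, frame_bytes, out);
        // Wait only for the copy we just queued, not the whole device.
        cudaStreamSynchronize(stream);
        ++frame;
    }
};
```

The point of the two buffers and the stream is that the PCIe transfer and the disk write proceed concurrently, so the render loop stalls only on the (fast) copy, not on I/O.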
The video capturer is used like this:
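A hypothetical usage sketch following the `VideoCapturer` naming above; the real capturer's interface may differ. For a pure OpenGL engine, the `dev_frame` pointer would come from mapping the render target via the CUDA OpenGL interop API (`cudaGraphicsGLRegisterBuffer` / `cudaGraphicsMapResources`) instead.

```cuda
// Illustrative render loop; render_frame() stands in for your own code.
int width = 1280, height = 720, n_frames = 600;
VideoCapturer cap(width * height * 4, "capture.raw");  // raw RGBA8 frames

for (int i = 0; i < n_frames; ++i) {
    render_frame(dev_framebuffer);   // result lands in a CUDA buffer
    cap.capture(dev_framebuffer);    // queues async copy, writes prev frame
}
```

The raw output can then be encoded offline, e.g. by piping it through ffmpeg, so no encoding cost is paid inside the application.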
Public domain. Enjoy!