Projects - How To Make Time-Lapse Videos of 3D Printing (or other stuff) using a Ridiculously Cheap Action Camera

It has been several years since I made any time-lapse videos from my 3D printers. I wanted to start making them again, but my printer setup is totally different now. After thinking through options for a while, I decided to try using an action camera (i.e., a "GoPro"...or for those of us on a budget, a "FauxPro...") mounted to the print bed. I don't have a budget line for "something that stares at my printer 24/7," so I did a lot of research looking for the cheapest camera with the quality and features I knew I would need, without the genuine GoPro price tag (typically around $300. Just yikes.) Full disclosure: I learned a lot and set up something that does what I need, but after working with it for a while, I'm looking for a better solution that will probably require a nicer camera eventually. Until then, I'll stick with the approach I worked out here.

As I said, this started because I wanted a camera to record 3D prints, but this information may be useful to anyone trying to make time-lapse videos of ANYTHING. And if you don't care about time-lapse videos but your camera makes videos with fish-eye distortion, read on, because this info might still be useful to you. Even if you have an authentic GoPro with all the bells and whistles, this could open the door to applying other filters and video effects to your videos easily and semi-automatically.

Here's a little more context: There are a number of techniques and mounts that Printers (as in, 3D printing people) use, but I decided to try an action camera mounted to the printer bed. Based on my research, a good camera that could handle most of what I needed would probably fall in the $60-$100 range. While I was searching, Wal-Mart ran a flash sale on the ApeMan A77 action camera, so I managed to get one for under $30. It has pretty good reviews (except from people who think they're buying a $300 GoPro for $30...come on...), a lot of accessories, a second battery, and the output seemed good enough. Feature-wise, it advertised that it could take photos, record video, had image stabilization (which turned out not to matter), and could do time-lapse photography, which were the main things I knew I might need. It wasn't perfect, but it was super cheap and seemed to check the most important boxes.

I got the camera and played with it for a little while. The marketing descriptions weren’t wrong, but they didn’t paint the whole picture, and some issues quickly became obvious:

1) First, the camera does NOT generate time-lapse videos (which is what I was expecting). Rather, it takes time-lapse photos. In simple terms, the camera has a selfie-timer that you can set to repeat indefinitely, so it will take a photo every 2, 3, 5, 10, 20, 30, or 60 seconds until you tell it to stop. When you're done, you're left with a folder of hundreds of photos rather than a single video file.


2) Also, it seems most cheap action cameras use a wide-angle lens as a "feature." That's good in that it gives them roughly a 170-degree field of view, showing almost everything in front of the camera. The negative side effect is fish-eye distortion. This is really noticeable with my 3D prints, which are generally pretty close to the camera. If the videos are too distorted and people can't tell what I'm printing, this is all for nothing.


3) Finally, the battery life of the camera is good (relative to similar cameras) at 90 minutes, but we Printers know that 90 minutes won't get you through a print of any decent size. To record a whole print, the camera needs to run anywhere from 4 to 30 hours for me. The second battery is a nice feature, but I'm not going to pause my print every 90 minutes to swap batteries.

It's worth saying that at this point, many people would probably realize this camera isn't a great fit for the purpose, return it, and look for a friendlier option. There's certainly nothing wrong with that wisdom, but for me, things like this become a puzzle I just want to solve. I knew none of these issues was insurmountable, so I decided to see what it would take to get what I wanted out of it. You never know when going the extra mile to solve an unnecessary problem today will give you the tools you need to solve a seemingly impossible problem later.

Fixing the Power Shortage

I'll start with the simplest issue to solve: powering the camera. The camera has a micro-USB port, and after testing I confirmed that it can be powered externally through that port while it's taking pictures. The bad news is that powering it from an "intelligent" USB source like a PC or Raspberry Pi can confuse the camera into thinking it should be running in webcam mode. On the one hand, webcam mode is one more cool feature this cheap guy boasts, buuuut while it's in that mode it won't let you access its other camera modes, like timed photographs. So the simple solution was an external power pack (as long as it has enough capacity to last the print) or a USB wall charger. If you leave the main camera battery in, it's no problem to swap power packs while everything is running.

Making a Movie

Now for the headier stuff: turning a bunch of individual still photos, all suffering from lens distortion, into a single well-corrected video. Most commercial video editing software can handle these issues easily, but most of those are interactive, window-based programs, which are difficult to automate. Also, I wanted to use open-source software so I could share what I learn with others without depending on them raising the money for a paid product or hunting down a hard-to-find product in stores. I don't know a ton about open-source video editors, but I've come across FFmpeg a number of times, and it seemed like a really good fit here. It's open source (free and community-supported), it can be run from the command line, and it has a wide selection of features, filters, codecs, and other video goodies that honestly go way beyond what I can actually appreciate. (If you come across this article, have a lot of experience with FFmpeg, and can give me any pointers to improve what I've done, I'd love your input.) Here's how I managed to solve the problem(s) at hand.

I confirmed FFmpeg should be able to compile the photos and fix the distortion, which meant I could break the big process down into a handful of steps that can be scripted. This was my general approach for the script design:

1) Copy image files into a local directory on my computer
2) Run a process to compile a video from the images
3) Run a process to remove the fish-eye distortion
4) Save the video as an mp4 (or other widely compatible format) in the same directory

Making a Movie - Preparing the Files

Shortly after I started putting something together and testing it, I discovered that the Windows version of FFmpeg has a shortcoming: it can't use the "pattern_type glob" input method (which would supposedly have made it easy to read in a set of image files with semi-arbitrary names, in the right order). The A77 filenames include a timestamp to help keep the images sorted, but that naming structure confuses FFmpeg. With my severely limited FFmpeg experience, the simplest option looked like adding another step to my script that reads in all the source files, maintains their order, and renames them with sequential 5-digit numbers ending in ".jpg" (00001.jpg, 00002.jpg, ...). FFmpeg has no trouble with this arrangement, so I just had to do the renaming.

There are different ways to solve this, but I chose to write a quick Python script because A) I already have Python installed, and B) I never remember basic DOS scripting commands. In truth, doing the renaming within the main script would be more elegant, but I'm a pragmatist, this works, and no one was paying me to do it a better way, so...good enough. Here's the basic Python script I wrote for the renaming:

# rename_images.py
import os
import shutil

def main():
    # sorted() keeps the timestamp-named files in shooting order
    # (os.listdir() alone makes no ordering guarantees)
    jpgs = sorted(f for f in os.listdir() if f.endswith(".JPG"))
    for i, name in enumerate(jpgs, start=1):
        # copy each photo to a zero-padded sequential name: 00001.jpg, 00002.jpg, ...
        shutil.copy2(name, str(i).zfill(5) + ".jpg")

if __name__ == "__main__":
    main()


If you have Python installed, save this as a script in a directory full of image files ending in .JPG (case sensitive), and run it, it will copy and rename all the JPG files to sequential 5-digit numbers ending in ".jpg". This script ain't elegant, and copying it into the directory with the images is a bad long-term solution, but we're still basically prototyping, so I'm cutting myself some slack.
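
If copying the script around bugs you, here's a minimal variation (just a sketch, assuming Python 3; the script name and example path are made up) that takes the image folder as a command-line argument instead, so it can live anywhere:

# rename_images_anywhere.py - hypothetical variant; pass the image folder as an argument
import os
import shutil
import sys

def main():
    folder = sys.argv[1]  # e.g. python rename_images_anywhere.py C:\prints\benchy_photos
    jpgs = sorted(f for f in os.listdir(folder) if f.endswith(".JPG"))
    for i, name in enumerate(jpgs, start=1):
        # copy to a zero-padded sequential name like 00001.jpg inside the same folder
        new_name = str(i).zfill(5) + ".jpg"
        shutil.copy2(os.path.join(folder, name), os.path.join(folder, new_name))

if __name__ == "__main__":
    main()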

Making a Movie - It's Like a Digital Flipbook

At this point, you're going to need FFmpeg to actually do the video processing. It's not hard to install. It is complicated and full of cryptic options, but the error messages are pretty helpful. Stay close to what I show you, google any errors you get, and you can eventually straighten out most issues. I did this on Windows; FFmpeg commands should be similar on other platforms, but your mileage may vary. I'm assuming a little familiarity with video editing lingo, but ask if you want something explained.

After installing, open a command prompt window and navigate to the directory where you copied and renamed your image files. The next step is actually calling FFmpeg to compile those photos into a video. Based on some general research, here's the command I used (I'll explain the important parts as I go):

ffmpeg -r 10 -i "%05d.jpg" -pix_fmt yuvj420p -c:v libx264 -vf "scale=1440:1080,setdar=4/3" my_video.mp4

-r 10 : set the framerate to 10 frames per second. Higher numbers lead to smoother time-lapse videos that run in less time. When converting images to video in FFmpeg, every image your camera took becomes one frame of your video, and the video shows them in quick succession, just like a flipbook animation. To calculate the length of the generated video in seconds, take the number of source images in your directory (which will be the highest-numbered image file) and divide it by the framerate. Toward the end, I'll share my formulas for calculating the framerate and setting the timer interval on your camera for the ideal length. Generally, I like a framerate between 20 and 30 for a smooth video.

-i "%05d.jpg" : this tells FFmpeg that we will be supplying it with a series of files named with 5-digit zero-padded numbers ending in .jpg. Unless you do exactly what I did and rename your image files this way, this will probably not work for you. You'll either need to research and change this to match your files, or you'll need to rename yours. (I'm hoping someone tells me a better way to do this on Windows.)

-pix_fmt yuvj420p -c:v libx264 : uuuh, this is close to magic gibberish. We're setting the pixel format (how the color data of each frame is stored) and telling FFmpeg to encode with libx264, a widely compatible H.264 encoder. If you're not an expert in that stuff, it's pretty technical and not really necessary to understand. Try this, and if it doesn't work, try something else.

-vf scale=1440:1080,setdar=4/3 : -vf applies a filter to the video as it is being created. This filter sets the video resolution to 1440x1080 and specifies that the display aspect ratio is 4:3. Incidentally, the lowest photo resolution from the camera (2304x1728) is way larger than needed for the kinds of videos I want to make (web videos and maybe YouTube). By default, the video will be created at the resolution of the source images, so scaling the video down to the specific resolution you need GREATLY reduces the final size of the file. This resolution is the right size for me and keeps the original aspect ratio of the photos, but you can tweak it as appropriate.

my_video.mp4 : this is the clever output file name I came up with. When (if) the video finishes processing, this file will appear in the directory.

Assuming everything is set up properly, run this command and you'll be rewarded with a video file for your trouble. 
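
By the way, if you want to double-check the result before moving on, ffprobe (a companion tool that installs alongside FFmpeg) will print the resolution, pixel format, framerate, and duration without playing the file:

ffprobe my_video.mp4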

Making a Movie - De-Fish-Eyeing the Video

The next step is lens correction. To correct the fish-eye effect, all we have to do is run our video through FFmpeg again, this time with a special filter that can increase or decrease the fish-eye effect. I got most of my information about this from here:

DeFish Effect with FFmpeg (https://www.cloudacm.com/?p=3068)

The call that works well for my camera is: 

ffmpeg -i my_video.mp4 -vf "lenscorrection=cx=0.5:cy=0.5:k1=-0.075:k2=-0.075" my_video2.mp4

Here's the rundown of the options:

-i my_video.mp4 : remember the -i in the last command? This time we're telling FFmpeg to get its input from the output file we just created in the last step, my_video.mp4.

-vf "lenscorrection=cx=0.5:cy=0.5:k1=-0.075:k2=-0.075" : We are applying the "lenscorrection" filter to the video, with the properties that follow it:

cx and cy represent the center of the lens distortion on the image, which is usually the center of the image itself. The location is measured as a number between 0 and 1 along each axis, so 0.5 for both cx and cy is dead center.

k1=-0.075:k2=-0.075 : This goes beyond what I can explain. Basically, these are coefficients in the correction formula that control how strongly the distortion is corrected. I suggest starting with these numbers, and if they make things worse or don't help enough, add or subtract (starting with small increments like 0.025) and see how the final videos are affected. Trial and error seems cheesy, but calculating perfect values here would probably require specs for your specific camera that you will never be able to find. (If you'd like to automate the comparison, see the sketch at the end of this section.)

my_video2.mp4 : another clever name for the final output file that will be generated. 

You may have noticed this command is shorter and simpler than the previous one that created the video in the first place. That's because once the resolution and pixel format are set and saved into the video file, FFmpeg can read those values back later and use them as the defaults.
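
About that k1/k2 trial and error: here's a rough Python sketch that could take some of the tedium out of it (my own convenience idea, not an official FFmpeg tool; the script and file names are made up). It renders a short test clip for each of several k values so you can compare them side by side:

# test_k_values.py - hypothetical helper for dialing in lenscorrection values
import subprocess

# candidate correction strengths; more negative = stronger defishing
k_values = [-0.025, -0.05, -0.075, -0.1, -0.125]

for k in k_values:
    filt = "lenscorrection=cx=0.5:cy=0.5:k1={0}:k2={0}".format(k)
    output = "test_k{0}.mp4".format(k)
    # -y overwrites old test files; -t 5 renders only the first 5 seconds
    # so each test finishes quickly
    subprocess.run(["ffmpeg", "-y", "-i", "my_video.mp4", "-t", "5", "-vf", filt, output])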

Combining the FFmpeg Calls

It's actually possible to combine the 2 FFmpeg calls into a single call that does the whole process: reading in sequentially numbered photos from a directory, resizing, applying the lens correction, and saving to an output file. The combination of both calls looks something like this:

ffmpeg -r 10 -i "%05d.jpg" -pix_fmt yuvj420p -c:v libx264 -vf "scale=1440:1080,setdar=4/3,lenscorrection=cx=0.5:cy=0.5:k1=-0.075:k2=-0.075" my_video.mp4

(Note: this command should be entered on one line, in case your browser is splitting it into two or more.)

Why didn't I give you this first? I learned there is value in separating the processes while you're debugging your settings. The more filters FFmpeg is applying, the longer it takes to complete. It's also possible that some people reading this only want one of the steps: maybe their camera takes time-lapse photos without distortion, or maybe it makes time-lapse videos but still has distortion issues. Whatever your case, you should be able to take the command you need and get it to work with a little tweaking.
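
And if you want the whole pipeline in one place, here's a rough sketch of how the renaming and the combined FFmpeg call could be glued together in Python (assuming FFmpeg is on your PATH; the script name is made up, and you'd tweak the framerate, scale, and k values to match your setup):

# make_timelapse.py - hypothetical end-to-end sketch: rename, then compile/scale/defish
import os
import shutil
import subprocess

def main():
    # step 1: rename the camera's JPGs to sequential zero-padded names
    jpgs = sorted(f for f in os.listdir() if f.endswith(".JPG"))
    for i, name in enumerate(jpgs, start=1):
        shutil.copy2(name, str(i).zfill(5) + ".jpg")

    # step 2: one FFmpeg pass that compiles, scales, and defishes
    vf = "scale=1440:1080,setdar=4/3,lenscorrection=cx=0.5:cy=0.5:k1=-0.075:k2=-0.075"
    subprocess.run(["ffmpeg", "-r", "10", "-i", "%05d.jpg",
                    "-pix_fmt", "yuvj420p", "-c:v", "libx264",
                    "-vf", vf, "my_video.mp4"])

if __name__ == "__main__":
    main()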

Figuring out Settings for Framerate, Camera Timer Duration, and Length of the Final Video

As I mentioned earlier, setting the framerate efficiently takes some foresight and planning. By "efficiently," I mean taking the minimum number of pictures needed to get 100% of the quality you want in the final video. To figure this out, you need to know how long you want the time-lapse video to run, how long your time-lapse event will take to complete, and how smooth you want the video animation to be. With that information you can choose your framerate and calculate the most efficient timer setting for your camera when taking the time-lapse.

Length of the output video (in seconds) x framerate = number of images needed

Length of the print (in seconds) / number of images needed = ideal time between photos

Here's a scenario. Say I have a 3D print that will take 14 hours (14 x 60 x 60 = 50,400 seconds). I want a very smooth video, so I want 24 frames per second. And I want the time-lapse video to last about 60 seconds.

(60-second final video) x (24 frames per second) = 1,440 frames needed

Length of the print (50,400 seconds) / 1,440 frames = 35 seconds between photos

Looking at my camera, the closest timer settings I have to 35 seconds are 30 seconds and 60 seconds. FFmpeg can stretch the video length using other filters, but ignoring those for now and using the default "1 image = 1 frame" math: if we can live with a shorter video, we could set the timer to 60 seconds, and the final video would be (50,400 / 60) = 840 frames, which at 24 fps is 35 seconds long. Or we could set the timer to 30 seconds, and the final video would be (50,400 / 30) = 1,680 frames, or 70 seconds at 24 fps. We could also tweak the framerate up or down a little to adjust the length of the final video.
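
If you'd rather not do that arithmetic by hand, here are the formulas above as a quick Python sketch (my own helper, with made-up variable names), preloaded with this scenario's numbers:

# timer_math.py - hypothetical calculator for the formulas above
print_hours = 14        # how long the print (or other event) will run
video_seconds = 60      # how long you want the final video to be
framerate = 24          # frames per second; I like 20-30 for smooth playback

print_seconds = print_hours * 60 * 60
images_needed = video_seconds * framerate
interval = print_seconds / images_needed

print("Images needed:", images_needed)           # 1440 for this scenario
print("Ideal seconds between photos:", interval) # 35.0 for this scenario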

Conclusion

If I were to buy another camera for time-lapse videos, I would probably look for one other than the ApeMan A77. Don't misunderstand: the A77 seems to be a great buy compared to other action cameras of its humble class, but it just has too many quirks to be a great tool for 3D printing time-lapse videos. I want a camera that lets me pull a single, perfect video file off the camera and use it. But if you already have a camera with limits and you've been trying to squeeze a little more usefulness out of it, then FFmpeg might be a tool you can use to clean up your videos. As repeatedly stated, I'm not really an expert in FFmpeg or cameras in general, so I welcome any pointers, tips, or corrections from experienced users that would help me or anyone else reading. And let me know if you have questions about the parts I glossed over, like the factors I looked for in a camera, different 3D printer time-lapse setups, etc., and I'd be happy to share my thoughts.

Little Plastic Things © Daniel Benton. All Rights Reserved.