Good with a keyboard and fluent in gibberish

I thought I would formally announce that I have changed companies. I'm now in a management position at a software-as-a-service company. If you want specifics, I'm sure you can find out.

I know I haven’t been posting much, but this is Important. GNOME needs help.

Even if you don’t personally use the GNOME desktop environment, you have certainly benefited from their work: GTK is used by GIMP, Pidgin, and others, and practically every Linux distro ships something they developed or promoted (GConf -> dconf, D-Bus, etc.).

Please donate!

Allow me to summarize my experience with patent searches:

  1. Try to come up with as many ways as you can to describe various aspects of your product
  2. Search Google for these descriptions
  3. Freak out when none of them resemble what you’re doing.

To future employers: If you like me and want to keep me, do this.

I don’t have any experience with this, but it rings true with me. Not a great metric, I know, but it’s often the best I’ve got.

If you know better, leave a comment below!

This has 0 technical content. But it’s an important aspect of people. I guess it’s just a reminder that everyone has inner demons and struggles with something. Nobody is all strong all the time.

I have an idea for how to process the raw image:

  1. Use the calibration information in each photo to make a model of the lenses
  2. Create a normal map of the raw image showing where each ray came from
  3. Create a color map of the raw image for future demosaicing
  4. Project rays from the modeled CCD and ray trace it to your selected focal plane
  5. Apply some filter to demosaic and rasterize

I still need to work out how to compute a depth map. Maybe look for where rays converge?
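Steps 4 and 5 can be approximated with the classic shift-and-sum refocusing trick. Here's a toy sketch in Python/NumPy; to be clear, this is not my actual pipeline, and the 4D light-field layout and the `alpha` focus parameter are assumptions for illustration:

```python
import numpy as np

def refocus(lightfield, alpha):
    """Toy shift-and-sum refocus of a 4D light field.

    lightfield: array of shape (U, V, X, Y), i.e. a grid of
                sub-aperture views (one per microlens direction)
    alpha: relative focal depth; alpha = 1.0 keeps the original plane
    """
    U, V, X, Y = lightfield.shape
    out = np.zeros((X, Y))
    for u in range(U):
        for v in range(V):
            # Shift each sub-aperture view according to its offset from
            # the center aperture, then accumulate. Integer shifts only;
            # a real implementation would interpolate.
            du = int(round((u - U // 2) * (1 - 1 / alpha)))
            dv = int(round((v - V // 2) * (1 - 1 / alpha)))
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)
```

At alpha = 1.0 every shift is zero, so the result is just the average of all sub-aperture views; other values of alpha slide the views past each other, which is what moves the focal plane.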

Tonight, I shall be giving a talk at the local Python Users Group on metaclasses and related topics. If there are any questions, feel free to ask them here!
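For anyone who can't make it, here's the flavor of thing a metaclass is good for. This is just a toy example of subclass auto-registration, not necessarily material from the talk:

```python
class PluginMeta(type):
    """Metaclass that records every subclass in a shared registry."""
    registry = {}

    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        if bases:  # skip the abstract base class itself
            PluginMeta.registry[name] = cls
        return cls

class Plugin(metaclass=PluginMeta):
    pass

class Resize(Plugin):
    # merely defining this class registers it; no decorator needed
    pass
```

After these definitions, `PluginMeta.registry` maps `'Resize'` to the `Resize` class, because `__new__` on the metaclass ran at class-definition time.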

So I have the raw database from the Mac version of the Lytro software. Each “image” is a directory with dm.lfp, raw.lfp, stack.lfp, stacklq.lfp, and thumbnail.jpg

The first thing I did was write a script to make galleries from the albums because UUIDs are incredibly unhelpful.
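Something along these lines works; this is a simplified sketch, not the real script, and the `make_gallery` name and bare-bones HTML are mine:

```python
from pathlib import Path

def make_gallery(album_dir, out_file="index.html"):
    """Write a minimal HTML gallery for one album directory.

    Assumes the layout described above: one UUID-named directory
    per "image", each containing a thumbnail.jpg.
    """
    rows = []
    for pic in sorted(Path(album_dir).iterdir()):
        if (pic / "thumbnail.jpg").is_file():
            # link the thumbnail, labeled with the (unhelpful) UUID
            rows.append(
                f'<img src="{pic.name}/thumbnail.jpg" alt="{pic.name}">'
            )
    html = "<html><body>\n" + "\n".join(rows) + "\n</body></html>"
    Path(album_dir, out_file).write_text(html)
    return len(rows)
```

Pointing a browser at the generated index.html at least lets you see which UUID is which picture.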

Looking at the other files, you can use lfpsplitter to get the sections out of LFP files. (The format has been reversed, but I’m not writing any software yet.)

I looked over everything. stack.lfp contains some H264 images, but nothing terribly useful. (Nothing like the jpegs lfpsplitter was made for.) However, raw.lfp contains the raw pixel array from the CCD. And raw2tiff can read this. So you just run:

raw2tiff -w 3280 -l 3280 -d short raw_imageRef0.raw raw.tiff

And you get what I’m calling the “bug-eye” view. It still needs to be demosaiced (the process of turning raw CCD pixel values into RGB pixels), and the bug-eye thing is a problem (microlens artifacts).
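The demosaicing step can be sketched pretty simply. Here's a naive half-resolution version in Python/NumPy, assuming an RGGB Bayer pattern; whether the Lytro sensor actually uses that layout is an assumption on my part:

```python
import numpy as np

def demosaic_rggb(raw):
    """Naive demosaic of an RGGB Bayer mosaic at half resolution.

    raw: 2D array of sensor values with even height and width.
    Each 2x2 Bayer cell (R G / G B) collapses to one RGB pixel,
    with the two green samples averaged.
    """
    r  = raw[0::2, 0::2]  # top-left of each cell: red
    g1 = raw[0::2, 1::2]  # top-right: green
    g2 = raw[1::2, 0::2]  # bottom-left: green
    b  = raw[1::2, 1::2]  # bottom-right: blue
    return np.dstack([r, (g1 + g2) / 2.0, b])
```

A real demosaic would interpolate to full resolution (bilinear at minimum), but this gets you a recognizable color image out of the TIFF.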

But this solves one of the problems: turning the raw file off the camera into a usable array of pixels. Too bad I don’t know how to get them off the camera or what to do with the pixels after I have them.

I’d like to write my own photo manager for my Lytro camera. My basic problem is Linux support.

I’m not sure what platform to use. HTML5 (in the form of CouchDB plus a Chrome extension) is appealing: easy-to-build interfaces, computer independence, etc.

On the other hand, all the binary files and processing mean that a native app might be easier to write, particularly for the dynamic display of “living photos” (pictures with refocusing and other light-field effects).

In any case, having to research and implement the algorithms involved sounds like the opposite of fun.

Just watch the Lytro tag for updates on this.

This is how I learned KiCad, the electronics design package. It’s great if you already have some idea about things like electronics and schematics and just need to learn the tools.

Since I watched this, he’s released his whole KiCad course for free. I haven’t really looked into it, since I’ve got other stuff going on, but it might be helpful for you.