Good with a keyboard and fluent in gibberish

I know I haven’t been posting much, but this is Important. Gnome needs help.

Even if you don’t personally use the Gnome desktop environment, you have certainly benefited from their work: GTK is used by Gimp, Pidgin, and others, and practically every Linux distro ships something Gnome developed or promoted (gconf->dconf, D-Bus, etc.).

Please donate!

Allow me to summarize my experience with patent searches:

  1. Try to come up with as many ways as possible to describe the various aspects of your product.
  2. Search Google for these descriptions.
  3. Freak out when none of the results resemble what you’re doing.

To future employers: If you like me and want to keep me, do this.

I don’t have any experience with this, but it rings true to me. Not a great metric, I know, but it’s often the best I’ve got.

If you know better, leave a comment below!

This has zero technical content, but it’s about an important aspect of being human. I guess it’s just a reminder that everyone has inner demons and struggles with something. Nobody is strong all the time.

I have an idea for how to process the raw image:

  1. Use the calibration information in each photo to make a model of the lenses
  2. Create a normal map of the raw image showing where each ray came from
  3. Create a color map of the raw image for future demosaicing
  4. Project rays from the modeled CCD and ray-trace them to your selected focal plane
  5. Apply some filter to demosaic and rasterize

I still need to figure out how to get a depth map. Maybe look for where rays converge?
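
To make step 4 concrete, here’s a rough sketch. It swaps the per-ray trace for the standard shift-and-sum refocusing trick, and it assumes the raw samples have already been resampled into a 4D light field lf[u, v, s, t] using the lens model from step 1; the layout and names are mine, not anything from Lytro.

import numpy as np

def refocus(lf, alpha):
    """Shift-and-sum refocus of a 4D light field.

    lf    -- array of shape (U, V, S, T): a U x V grid of sub-aperture
             images, each S x T pixels (assumed layout, see above).
    alpha -- relative depth of the new focal plane; 1.0 keeps the
             original focus.
    """
    U, V, S, T = lf.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Shift each sub-aperture image in proportion to its offset
            # from the aperture center, then average them all.
            # (np.roll wraps at the edges; good enough for a sketch.)
            du = int(round((u - U / 2) * (1 - 1 / alpha)))
            dv = int(round((v - V / 2) * (1 - 1 / alpha)))
            out += np.roll(lf[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

That also suggests one answer to the depth-map question: refocus at a stack of alphas and, for each pixel, keep the alpha with the highest local contrast (classic depth-from-focus). Where the rays converge is where the image is sharpest.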

Tonight, I shall be giving a talk at the local Python Users Group on metaclasses and related topics. If there are any questions, feel free to ask them here!

So I have the raw database from the Mac version of the Lytro software. Each “image” is a directory containing dm.lfp, raw.lfp, stack.lfp, stacklq.lfp, and thumbnail.jpg.

The first thing I did was write a script to make galleries from the albums, because UUIDs are incredibly unhelpful.
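
A minimal version might look like this. It assumes the layout above (album directories named by UUID, each containing a thumbnail.jpg); the paths are illustrative, and it isn’t the script I actually wrote.

#!/usr/bin/env python
# Build a crude HTML gallery from a Lytro library directory whose
# albums are named by UUID but each contain a thumbnail.jpg.
import os
import sys

library = sys.argv[1]  # path to the Lytro picture database
figures = []
for uuid in sorted(os.listdir(library)):
    thumb = os.path.join(library, uuid, "thumbnail.jpg")
    if os.path.exists(thumb):
        figures.append('<figure><img src="%s"><figcaption>%s</figcaption></figure>'
                       % (thumb, uuid))

print('<!DOCTYPE html>\n<title>Lytro gallery</title>\n' + '\n'.join(figures))

Redirect the output to an HTML file next to the library and open it in a browser.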

Looking at the other files, you can use lfpsplitter to get the sections out of LFP files. (The format has been reverse-engineered, but I’m not writing any software yet.)

I looked over everything. stack.lfp contains some H.264 images, but nothing terribly useful (nothing like the JPEGs lfpsplitter was made for). However, raw.lfp contains the raw pixel array from the CCD, and raw2tiff can read it. So you just run:

raw2tiff -w 3280 -l 3280 -d short raw_imageRef0.raw raw.tiff

And you get what I’m calling the “bug-eye” view. It still needs to be demosaiced (the process of turning raw CCD pixels into RGB pixels), and the bug-eye thing is a problem (microlens artifacts). There’s a sketch of the demosaicing idea below.
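
For the curious, the simplest (bilinear) flavor of demosaicing fits in a few lines. This assumes an RGGB Bayer tile, which may or may not be what the Lytro sensor actually uses:

import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw, pattern="RGGB"):
    """Bilinear demosaic of a Bayer mosaic into an RGB image."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3), dtype=float)
    # Where each channel's samples sit inside the 2x2 Bayer tile.
    (ry, rx), greens, (by, bx) = {
        "RGGB": ((0, 0), ((0, 1), (1, 0)), (1, 1)),
    }[pattern]
    rgb[ry::2, rx::2, 0] = raw[ry::2, rx::2]
    for gy, gx in greens:
        rgb[gy::2, gx::2, 1] = raw[gy::2, gx::2]
    rgb[by::2, bx::2, 2] = raw[by::2, bx::2]
    # These kernels average the nearest known samples of each channel;
    # pixels that already have a sample pass through unchanged.
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    for c, k in ((0, k_rb), (1, k_g), (2, k_rb)):
        rgb[..., c] = convolve(rgb[..., c], k, mode="mirror")
    return rgb

Real converters use edge-aware interpolation to avoid the color fringing this produces, but it’s enough to see a picture. (It also won’t fix the bug-eye problem; that’s the microlenses, not the color filter.)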

But this solves one of the problems: turning the raw file from the camera into a usable array of pixels. Too bad I don’t know how to get the files off the camera, or what to do with the pixels once I have them.

I’d like to write my own photo manager for my Lytro camera. My basic problem is Linux support.

I’m not sure what platform to use. HTML5 (by way of CouchDB + a Chrome extension) is appealing: easy-to-build interfaces, independence from any one computer, etc.

On the other hand, all the binary files and processing mean that a native app might be easier to write, particularly for the dynamic display of “living photos” (refocusing and other light-field effects).

In any case, having to research and implement the algorithms involved sounds like the opposite of fun.

Just watch the Lytro tag for updates on this.

This is how I learned KiCad, the electronics design package. It’s great if you already have some idea about things like electronics and schematics and just need to learn the tools.

Since I watched this, he’s released his whole KiCad course for free. I haven’t really looked into it, since I’ve got other stuff going on, but it might be helpful for you.

At work, we have a big display (that will be) mounted on the wall and driven by a Beaglebone Black. We thought it’d be cool to be able to show each other what we’re doing by throwing it up on the screen.

After some research, I decided to use x11vnc. I thought I’d have to SSH into the Beaglebone and set up a port forward, but then I read the x11vnc man page and discovered VNC reverse connections.

A VNC reverse connection is when the VNC client (the one displaying) listens and the VNC server (the one with the applications) connects to it. Using this, I could create a simple daemon with off-the-shelf programs and the tiniest of shell scripts.

Display

The Beaglebone Black already has an .xsessionrc for its default user, so you just configure it to autostart X and then add xvncviewer -listen 0 -fullscreen & to the end. Easy peasy.

As a bonus, we set up a hostname and mDNS on the Beaglebone so that we don’t have to deal with static IPs, servers, DNS entries, etc.

Clients

The client script is a bit more involved, just because I went a little overboard on options to make sure weirdness didn’t happen. We install this script (sketched below) on each machine we want to connect to the display.

Because our setups differ, we each pass different options to x11vnc.

  • I have two displays, so I use -clip xinerama0 and -clip xinerama1 to switch between them.
  • My coworker has a display with a higher resolution than the one mounted, so he uses -scale 3/4 to shrink his screen to fit.
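
Ours really is the tiniest of shell scripts, but the same idea in Python looks something like this; the mDNS hostname and the exact option set are illustrative, not a transcript of our script:

#!/usr/bin/env python
# Push this machine's X session to the wall display via a VNC reverse
# connection: x11vnc (the server) dials out to the listening viewer.
import subprocess
import sys

DISPLAY_HOST = "wallboard.local"  # the Beaglebone's mDNS name (made up)

cmd = [
    "x11vnc",
    "-connect", DISPLAY_HOST,  # reverse connection: we dial the viewer
    "-once",                   # exit when the viewer goes away
    "-viewonly",               # the wall display shouldn't drive our mouse
]
cmd += sys.argv[1:]  # per-machine extras, e.g. -clip xinerama1 or -scale 3/4
sys.exit(subprocess.call(cmd))

Each of us runs it with whatever extras fit our machine.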

Does it work?

It works well! Videos don’t play well and there’s a lot of tearing, but it works. Multiple users work fine, too; they just stack up in connection order, so the newest one is on top.

In fact, the only changes I want to make are things like removing the desktop environment on the display and setting up a default application.