Good with a keyboard and fluent in gibberish

Ok, so last time I said there would be some designs in this post. That was a complete fabrication. (Or maybe an enhancement of the truth?)

I’ve been thinking a lot about two harder problems: 1: I love Web Components. 2: Polymer has all the compatibility of car components (i.e., it only runs on Chrome).

So I’m leaning more towards Mozilla’s X-Tags to do some of the heavy lifting of Custom Elements, use HTML Imports, and leave Shadow DOM out of the picture. The problem with this is that X-Tags does approximately nothing for you with regard to templating, data binding, or anything else you need to do.

For some reason, I also want to move away from jQuery. Vanilla JS has come a long way since 2004, and the fewer frameworks of questionable utility I use, the lighter the app will be. To that end, I wrote doubledollar.js, a simple wrapper around some of the pain points of the DOM (web components stuff, abbreviations of common operations, AJAX things, some promise stuff, etc.). I expect to develop it more as I dig into Leatherbound.

On the data side, I had a painful reminder that I’m no longer in key-document land and that Google Cloud Datastore (built on BigTable) stores simple schemaless records. So I had to do some actual thinking about my data store.

What I’m thinking is that an entire post is stored as (sanitized) HTML in a single value in the database, making sure that the user-facing, searchable text lives in the elements’ text content. We’re already making heavy use of custom tags on the front end, so why not use them in the data store, too? The software can semantically know exactly what happened, widgets can be updated, and the messy implementation details (like the code to embed a video) don’t appear in the DB. Use tags like <lb-event>, <lb-picture>, <lb-tag>, <lb-mention>, etc. The software can easily parse these out for various reports (lists of media, tag clouds, etc.).
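As a rough sketch of how the software could pull those custom tags back out of a stored post, here’s a minimal pass using Python’s built-in html.parser. The tag names match the examples above, but the class and function names are just illustrative, not Leatherbound’s actual code:

```python
from collections import defaultdict
from html.parser import HTMLParser


class LbTagCollector(HTMLParser):
    """Collect every <lb-*> custom tag from a stored post body."""

    def __init__(self):
        super().__init__()
        self.found = defaultdict(list)  # tag name -> list of attribute dicts

    def handle_starttag(self, tag, attrs):
        if tag.startswith("lb-"):
            self.found[tag].append(dict(attrs))


def collect_lb_tags(post_html):
    """Return {tag_name: [attr_dict, ...]} for all lb-* tags in the post."""
    parser = LbTagCollector()
    parser.feed(post_html)
    return dict(parser.found)
```

One pass like this could drive the media lists and tag clouds, and the same walk could populate the querying cache columns mentioned below.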

For indexing, hash-tags and at-mentions would also need to be stored separately, but I consider those cache columns for the benefit of querying.

My next problem is the posting and editing process. Because of uploading media, posting an entry is not a single POST request but several. Cleanup of media from deleted posts would need to follow (unless I take the Tumblr approach and store everything forever). The alternative is to upload all the media at once, but that sounds like a recipe for a bad user experience (long post time, higher chance of failure, and unusual behavior).
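For the cleanup side, one option is a periodic sweep that diffs the media keys in storage against the keys still referenced by post bodies. A naive sketch (the function and parameter names are hypothetical, and a real version would parse the <lb-picture> tags rather than substring-match):

```python
def find_orphaned_media(stored_keys, post_bodies):
    """Return media keys in storage that no post body references anymore.

    stored_keys: iterable of blob keys currently held in media storage
    post_bodies: iterable of post HTML strings; a media reference is assumed
                 to contain the blob key somewhere in the markup
    """
    stored = set(stored_keys)
    referenced = set()
    for body in post_bodies:
        for key in stored:
            if key in body:
                referenced.add(key)
    return stored - referenced
```

Anything returned here is safe to delete, at the cost of scanning every post body on each sweep; an index of key-to-post references would avoid the full scan.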

I’ve been using the i3 Window Manager for a little while now on my laptop. For those of you who don’t know, i3 is a tiling window manager somewhere between wmii and awesome.

I personally find it’s minimal enough to be quite nice on my 15" laptop, but still implements useful things like notifications, systray, and a normal way to log out.

My customizations are available on my GitHub, with the config file and support scripts. The big things it does are setting up i3lock (with a fuzzed background, thanks to ImageMagick), starting a bunch of baseline stuff, and fixing the Chrome App List.

So keyboard-based window management is here to stay for me. Good luck getting me on vim or emacs, though.

I’m working on a new project which I thought I’d turn into a series. It’s nothing fancy or sophisticated. The code name is Leatherbound, and it’s a diary/journal web app. This first post I just want to lay out the premise and some of the requirements and guidelines.

  • Easy to use
  • Responsive design

Some of the decisions I’ve made:

  • Using Material Design: because why not? At least it’s not more Bootstrap
  • Google AppEngine: scalable and free limits
  • Google Cloud Datastore: I think NoSQL will work better than SQL
  • Python: Obviously
  • Google Login
  • Using Polymer (although that might not be mobile friendly? May ditch it)

Some features:

  • Optionally include events that have occurred since the last post (from Google Calendar)
  • Full WYSIWYG (I’m thinking inspired by hallo.js, but probably re-implemented for material)
  • Embedding media, including photos

Some other random details:

  • First screen after login is the new entry screen, and it should load fast. I figure it’s the #1 thing people do when they log in.
  • May look into Ember? Initial load time is a concern given the previous requirement, on top of Polymer’s already slow loading.

Next time, I’ll be showing the prototype interface I’ve got going so far.

(See all the posts with the #leatherbound tag.)

Seeing as I’m a maker, I have two ideas for how to do this:

  1. Tear apart an off-the-shelf voice changer and exchange the speaker or other components for your own to get the behavior you need.
  2. Build your own voice changer

Either way, check what others have done (Hack A Day).

Keep in mind that good speakers just take up space. You can’t avoid this.

I thought I would formally announce that I have changed companies. I’m now in a management position at a software-as-a-service company. If you want specifics, I’m sure you can find out.

I know I haven’t been posting much, but this is Important. Gnome needs help.

Even if you don’t personally use the Gnome desktop environment, you have certainly benefited from their work: GTK (used by Gimp, Pidgin, and others), practically every Linux distro has something they developed or promoted (gconf->dconf, D-Bus, etc).

Please donate!

Allow me to summarize my experience with patent searches:

  1. Try to come up with as many ways as possible to describe various aspects of your product
  2. Search Google for these descriptions
  3. Freak out when none of them resemble what you’re doing.

To future employers: If you like me and want to keep me, do this.

I don’t have any experience with this, but it rings true to me. Not a great metric, I know, but it’s often the best I’ve got.

If you know better, leave a comment below!

This has 0 technical content. But it’s an important aspect of people. I guess it’s just a reminder that everyone has inner demons and struggles with something. Nobody is all strong all the time.

I have an idea for how to process the raw image:

  1. Use the calibration information in each photo to make a model of the lenses
  2. Create a normal map of the raw image showing where each ray came from
  3. Create a color map of the raw image for future demosaicing
  4. Project rays from the modeled CCD and ray-trace them to your selected focal plane
  5. Apply some filter to demosaic and rasterize

I still need to process for a depth map. Maybe look for where rays converge?
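The convergence idea can be made concrete: for a pair of rays traced back from the sensor, the midpoint of their closest approach gives one depth sample. A small sketch, assuming rays come as origin-plus-direction 3-tuples in camera space and are not parallel (all names are illustrative):

```python
def dot(u, v):
    """Dot product of two 3D vectors given as tuples."""
    return sum(a * b for a, b in zip(u, v))


def ray_convergence(p1, d1, p2, d2):
    """Midpoint of closest approach between rays p1 + t*d1 and p2 + s*d2.

    The returned point's z coordinate serves as a depth estimate for the
    pixel pair. Assumes the rays are not parallel (denominator non-zero).
    """
    w0 = tuple(a - b for a, b in zip(p1, p2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b                      # zero only for parallel rays
    t = (b * e - c * d) / denom                # parameter on ray 1
    s = (a * e - b * d) / denom                # parameter on ray 2
    q1 = tuple(p + t * v for p, v in zip(p1, d1))
    q2 = tuple(p + s * v for p, v in zip(p2, d2))
    return tuple((u + v) / 2 for u, v in zip(q1, q2))
```

Running this over every pair of rays that image the same scene point would give a (noisy) depth map; the distance between q1 and q2 doubles as a confidence measure.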