Apple vs. Samsung: The cost of Android fragmentation.

I’ve commented in the past about how Android fragmentation isn’t as big an issue as some developers make it out to be.  But there is one elephant in the room that no one is talking about:

Android fragmentation and lack of updates increased damages in the Apple vs. Samsung lawsuit.

How?  Why?  Well, the sad truth of it is that Android has been evolving over time to be “less infringing” on Apple patents.  One great example of this is the “bounce-back scrolling” patent.  This feature did not exist in Android 1.x, was implemented (poorly, I might add) in Android 2.x, and was then removed in Android 3.x and 4.x, replaced with a non-infringing color-overlay scrolling feedback mechanism.

So, if Samsung (or any other vendor) had been able to keep devices up to date more quickly, they would have owed Apple less in damages.  Similarly, if they had been quicker to adopt new Android versions (say, 4.0), they would not be liable for any damages on this patent at all.

There are many other examples of Google evolving the Android user interface to NOT infringe on Apple patents.  This is the first case I can think of where fragmentation (more specifically: a lack of keeping software versions up to date) has cost anyone real money.  On the books, it’s Samsung who’s paying, and maybe this means they’ll be more aggressive about keeping up to date.

I also hope Google sees this lesson, and helps the hardware manufacturers with hardware drivers and the other issues that hold back software updates on many devices.

Amazed at how many different Mars gallery interfaces there are.

Here, have this pile of links:

I’m going to keep updating this list as I find more, so check back.  I’ve already added 3 new links since I first wrote this.

Supercharge your bash prompt with git status goodness.

Here’s a thought:

Wouldn’t it be awesome if your bash prompt could show you:

  • Your current working directory.
  • Which git repository you’re currently in.
  • Which git branch you’re currently on (if not master).
  • How many outstanding files you have (files that need to be added or committed).
  • How many changes ahead (or behind) origin/HEAD you currently are.
  • Your current virtualenv (for Python development, though it doesn’t hurt other languages).

Well, all of this is possible (and more, probably!).  I worked a bit on getting all these features working this afternoon.  The source code is pretty rough, but I think this could be useful enough for others that I should start to share it.  I’ll likely put this in its own github repository eventually.  But, for now, here’s a simple gist with my ~/.bash_prompt source.

To use this, just copy it to your home directory, and add the following to the bottom of your ~/.bashrc:

source ~/.bash_prompt
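The gist has the full implementation, but the core trick is just command substitution inside PS1. Here’s a minimal sketch of the idea (the function names are illustrative, not taken from the gist, and this covers only the directory, branch, and dirty-file-count features):

```shell
# Minimal sketch of a git-aware prompt; NOT the full ~/.bash_prompt from
# the gist, just an illustration of the technique.

# Print " (branch)" when inside a git repository on a non-master branch.
__prompt_git_branch() {
    local branch
    branch=$(git symbolic-ref --short HEAD 2>/dev/null) || return 0
    [ "$branch" != "master" ] && printf ' (%s)' "$branch"
    return 0
}

# Print " [N]" when N files are modified or untracked.
__prompt_git_dirty() {
    local count
    count=$(git status --porcelain 2>/dev/null | wc -l)
    [ "$count" -gt 0 ] && printf ' [%s]' "$count"
    return 0
}

# \w is the current working directory; the functions fill in git state.
PS1='\w$(__prompt_git_branch)$(__prompt_git_dirty)\$ '
```

Outside a git repository both functions print nothing, so the prompt degrades to a plain working-directory prompt.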

Adding custom launchers to Gnome3’s Favorites.

This is totally non-obvious, so here goes.

At a shell prompt, run:

$ gnome-desktop-item-edit ~/.local/share/applications/mylauncher.desktop --create-new

Go through the dialog to create the launcher, and make sure you give it an easy-to-remember name.  When you’re done, that application should show up under “Applications” in the Activities search.
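If you’d rather skip the dialog entirely, a launcher is just a small text file in the freedesktop .desktop format, so you can write one by hand. A hypothetical example (the Name, Exec, and Icon values here are made up):

```shell
# Write a launcher file by hand instead of using the dialog.
# The Name/Exec/Icon values are made-up examples.
mkdir -p ~/.local/share/applications
cat > ~/.local/share/applications/mylauncher.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=My Launcher
Exec=gnome-terminal
Icon=utilities-terminal
EOF
```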

Installing pip dependencies without touching the ‘net.

@jacobian just tweeted:

A little shitty-wifi-inspired hack to make pip install not have to touch the ‘net at all: http://bit.ly/IRrRcn

Yeah, been there, done that (using PIP_DOWNLOAD_CACHE).  It’s a good idea, but pip itself has better support for doing this.  I learned this technique from the pip development team, specifically @carljm, over IRC and in some bug reports.

“pip install --no-install” first

Use an “sdist cache”, not PIP_DOWNLOAD_CACHE.  An sdist cache holds the actual distributed files, not the “pip-ified” files from PyPI.  Pick a directory to store these sdist files in.  From here on out, I’m going to assume you’re putting them in $SDIST_DIR, wherever you decide that should be.

If you’re adding a new dependency, and you want to be able to install it later without touching the ’net, you need to download it first, and then install it from that download.  For example, if I wanted to include Django, I’d do this:

pip install --no-install --no-input --use-mirrors -I --download=$SDIST_DIR django

Which will put a file named something like Django-1.4.tar.gz (note the nice filename!) into $SDIST_DIR.  You can then put $SDIST_DIR under version control.

“pip install --find-links” second

Then, you can install django (or any other dependency that you’ve previously downloaded) without touching the ‘net by executing:

pip install -I --find-links=file://$SDIST_DIR --no-index --index-url=file:///dev/null django

Use requirements.txt, but not like they taught you

Unfortunately, this technique breaks “pip install -r requirements.txt”.  (I don’t remember the exact details, but I do remember it’s broken.)  But the format of requirements.txt is simple enough that you can basically say:

for dependency in $(cat requirements.txt); do
    pip install -I --find-links=file://$SDIST_DIR --no-index --index-url=file:///dev/null $dependency
done
Just put this into a shell script to make your life easier, which leads us to…

Wrap it all up into a collection of shell scripts

Now that you know the general technique, you’ll want to wrap it up into a few different shell scripts.  Here’s what I do (without source, but I’ll share soon).

./add_dependency.sh: Download a new single dependency, per the pip line above, and then immediately install it.  This leaves a file in $SDIST_DIR, but that’s good, because it reminds me (via source control) that I’m out of sync with what everyone else thinks the dependencies are.

./download_all_dependencies.sh: Run “pip freeze” and download every package currently installed into the current virtualenv.  This is good because often “pip install foo” will download several dependencies, and the ./add_dependency.sh script above doesn’t properly handle those cases.  I think this is a bug in pip.

./install.sh: Take “requirements.txt” and process it line-by-line running the “install but don’t download” commandline from above.
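For illustration, here’s roughly what those three scripts boil down to, written as shell functions; the pip flags match the commands above, and $SDIST_DIR and requirements.txt are the same assumptions as before (my real scripts differ in the details):

```shell
# Rough sketch of the three wrapper scripts as shell functions.
# This just shows the shape; the real scripts differ in the details.
SDIST_DIR=${SDIST_DIR:-./sdist}

# add_dependency.sh <package>: download the sdist, then install from it.
add_dependency() {
    pip install --no-install --no-input --use-mirrors -I \
        --download="$SDIST_DIR" "$1"
    pip install -I --find-links="file://$SDIST_DIR" --no-index \
        --index-url=file:///dev/null "$1"
}

# download_all_dependencies.sh: re-download an sdist for everything that
# "pip freeze" reports as installed in the current virtualenv.
download_all_dependencies() {
    pip freeze | while read -r dependency; do
        pip install --no-install --no-input --use-mirrors -I \
            --download="$SDIST_DIR" "$dependency"
    done
}

# install.sh: install each line of requirements.txt from the local cache.
install_all() {
    for dependency in $(cat requirements.txt); do
        pip install -I --find-links="file://$SDIST_DIR" --no-index \
            --index-url=file:///dev/null "$dependency"
    done
}
```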


Make your tests 7x faster in Django 1.4

In Django 1.4, the default password hasher has been switched to the extremely secure PBKDF2 algorithm.

But, each PBKDF2 hash computation can take a pretty long time (on my system, about 150ms per hash, which happens twice per unit test that I’m writing). For your test cases (which probably create users and log them in and out), this extra security is pointless, and runtime is paramount.

So, create a custom settings module for your test cases, and set the PASSWORD_HASHERS setting to exclude PBKDF2. You can also use this technique to set up an in-memory sqlite3 database as your backend, which also speeds things up quite a bit. Here’s a snippet from my settings_test.py:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': ':memory:',
        'USER': '',                      # Not used with sqlite3.
        'PASSWORD': '',                  # Not used with sqlite3.
        'HOST': '',                      # Not used with sqlite3.
        'PORT': '',                      # Not used with sqlite3.
    }
}

PASSWORD_HASHERS = (
    # 'django.contrib.auth.hashers.PBKDF2PasswordHasher',
    # 'django.contrib.auth.hashers.PBKDF2SHA1PasswordHasher',
    # 'django.contrib.auth.hashers.BCryptPasswordHasher',
    'django.contrib.auth.hashers.SHA1PasswordHasher',
    'django.contrib.auth.hashers.MD5PasswordHasher',
    # 'django.contrib.auth.hashers.CryptPasswordHasher',
)

Note that you have to leave the SHA1 hasher enabled for the django.contrib.auth tests to work. You can select the test settings with something like:

DJANGO_SETTINGS_MODULE=yourapp.settings_test django-admin.py test

(probably putting this in a Makefile or shell alias), or I’ve also seen people put this at the top of their default settings:

if 'test' in sys.argv:
    from settings_test import *

These 2 changes took my test run time from 15.5 seconds down to 2.0 seconds, an improvement of about 7x! Woot!

Array valued Form fields in Django.

So, you want to pass an array of values into a Form in Django.  It’s not exactly obvious what the right solution is. You could read up on MultiValueField (https://docs.djangoproject.com/en/dev/ref/forms/fields/#multivaluefield) or you could read about widgets.MultipleHiddenInput (https://docs.djangoproject.com/en/dev/ref/forms/widgets/#multiplehiddeninput), but you’ll realize that neither of these allows for custom validation of the individual entries.

Here’s a generic ArrayField that might be of use:

from django import forms
from django.forms.widgets import MultipleHiddenInput


class ArrayField(forms.Field):

    def __init__(self, *args, **kwargs):
        # The field used to validate and clean each individual element.
        self.base_type = kwargs.pop('base_type')
        # MultipleHiddenInput pulls the full list of values out of the
        # QueryDict, rather than just the last one.
        self.widget = MultipleHiddenInput
        super(ArrayField, self).__init__(*args, **kwargs)

    def clean(self, value):
        for subvalue in value:
            self.base_type.validate(subvalue)

        # base_type.clean() raises ValidationError on any bad element.
        return [self.base_type.clean(subvalue) for subvalue in value]

Here’s the code I’m using to unit test this puppy (QueryDict comes from django.http; ExaTestCase is my own TestCase subclass):

class TestCharArrayForm(forms.Form):
    multi_char = ArrayField(base_type=forms.CharField(max_length=3))


class TestArrayField(ExaTestCase):

    def test_array_good(self):
        query_dict = QueryDict('a=1', mutable=True)
        query_dict.setlist('multi_char', ('abc', 'def', 'ghi'))
        test_form = TestCharArrayForm(query_dict)
        self.assertTrue(test_form.is_valid())
        self.assertEqual(test_form.cleaned_data['multi_char'],
                         ['abc', 'def', 'ghi'])

    def test_array_invalid(self):
        query_dict = QueryDict('a=1', mutable=True)
        query_dict.setlist('multi_char', ('abcd' * 10, # too long
                                          'deff' * 10,
                                          '1234' * 10))
        test_form = TestCharArrayForm(query_dict)
        self.assertFalse(test_form.is_valid())

It would be very straightforward to add extra fields on ArrayField to check for number of items in the array or any other characteristics you want.

Adding an array of values to a Django form

It’s possible to have an array-valued field in a Django Form; it’s just really, really not clear how to do it.

Background: I’m writing a series of REST APIs using a Django backend, and I like to define the parameters for POST and PUT as Django Forms objects.  I’m never rendering my Forms as HTML, as they just define the API.

In some cases, I’d like to pass an array of values.  Let’s say, an array of string tags for a blog post in the POST method that creates a blog post.  The form for this API would look like this:

class CreateBlogForm(forms.Form):
    title = forms.CharField(max_length=2000)
    body = forms.CharField()
    tags = forms.CharField(widget=forms.MultipleHiddenInput)

Then, in my View code, I would write a snippet that looked like this:

    blog_form = CreateBlogForm(request.POST)
    if not blog_form.is_valid():
        raise Exception("Invalid form")
    blog_data = blog_form.cleaned_data
    blog = Blog.objects.create(title=blog_data['title'], body=blog_data['body'])
    for tag in blog_data['tags']:
        blog.add_tag(tag)

Note how I’m accessing the tags entry as a list? Exactly what I wanted!

Jenkins workspace archiving breaks on symlinks.

Our Jenkins build was working great (archiving one workspace, and then untarring it into another using the Archive for Clone Workspace feature) and then one day it broke.

The error showed up in the second build job, and looked like this:

	at hudson.model.Run.run(Run.java:1421)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:238)
Caused by: java.io.IOException: Failed to chmod /var/lib/jenkins/jobs/oswebsite_test/workspace/wsve/lib/python2.6/UserDict.py : Operation not permitted
	at hudson.FilePath._chmod(FilePath.java:1248)
	at hudson.FilePath.readFromTar(FilePath.java:1813)
	... 16 more

The issue is Jenkins Bug 13280 which basically says that the “Archive for Clone Workspace” feature is broken if your workspace contains symlinks.  Hopefully that bug will be fixed soon.

My workaround was to just set a “custom workspace directory” for the second job to be the workspace directory of the first job.  Not a clean solution, but it gets things done.

emacs, tramp, ido, dbus and avahi

Yeah, that’s quite a cast of characters isn’t it?

If you’re using emacs, and inside emacs you use ido-mode, then by default it attempts “tramp completion”, which by default uses dbus, which by default uses avahi (i.e. zeroconf) to browse your network for shares.

What this means is that if you use ido and you’re on a big network (with lots of avahi/zeroconf/rendezvous hosts), you’ll see a noticeable slowdown in opening files. The solution is:

M-x customize-group ido

and turn off ido-enable-tramp-completion

My .emacs.d/init.el has this line in the custom-set-variables section:

'(ido-enable-tramp-completion nil)

Then it won’t use tramp, it won’t use dbus, and it won’t use avahi, and you’ll be able to swiftly open files again. Whew!