jQuery UI Draggable Table Rows Gotchas

I ran into a couple of gotchas while dragging table rows. One was an easy fix; the other took some guesswork.

Can’t drag individual TR elements

If you call draggable() on a TR element, it will not work, because a TR dragged on its own has nothing to render into (TRs must be inside a table to render properly).

To fix this, use the helper option to temporarily wrap the TR in a table while it is being dragged.

            $(".tr_row").draggable({
                helper: function(event) {
                    return $('<div class="drag-row"><table></table></div>')
  .find('table').append($(event.target).closest('tr').clone()).end();
                },
            });

Sortable / Droppable doesn’t work on TR elements, or TR elements disappear during drag

The second problem I ran into was a little more obscure: your droppable and sortable selectors MUST be applied to a <tbody> tag.

I noticed that when I had the normal <table><tr><td> structure and dragged, the browser had created a <tbody> element to contain the rows.

 $("#my_selector tbody").droppable().draggable();

Python Unicode Graceful Degradation to ASCII

Unicode problems have been among the harder issues to deal with: external libraries and hardware such as label printers sometimes don’t support Unicode, and they throw nasty errors or, worse, cause mysterious silent bugs.

I’ve continually found better ways to deal with these strings. Here’s my journey:

Quick, dirty, and destructive list comprehension

One solution I used while in the shell was simply to keep only the characters whose ord() value is below 128.

This method was destructive, but it was acceptable to me given the situation.

unicode_string = u'Österreich'
dirty_fix = ''.join([x for x in unicode_string if ord(x) < 128])

Built-in string method encode

Next up I learned about the encode method on strings. It encodes a string to a given encoding, but the important part is the second argument, errors, which you can set to 'ignore' or 'replace'.

unicode_string = u'Österreich'
unicode_string.encode('ASCII', 'ignore')
# out: 'sterreich'

unicode_string.encode('ASCII', 'replace')
# out: '?sterreich'

Graceful degradation with the Python standard library unicodedata

The best solution I’ve found thus far is the standard library module unicodedata, which allows Latin Unicode characters to degrade gracefully into ASCII.

The module contains a function normalize, which is documented as follows:

Return the normal form form for the Unicode string unistr. Valid values for form are ‘NFC’, ‘NFKC’, ‘NFD’, and ‘NFKD’.

…snip…

The normal form KD (NFKD) will apply the compatibility decomposition, i.e. replace all compatibility characters with their equivalents. The normal form KC (NFKC) first applies the compatibility decomposition, followed by the canonical composition.

…snip…

The short version: if you use NFD or NFKD, the function converts each Unicode character into its normal form D, known as canonical decomposition.

A decomposed character may contain a base letter that exists in ASCII, such as Ö -> O (the leftover combining mark is then dropped by the ASCII encode step).

import unicodedata

unicode_string = u'Österreich'
unicodedata.normalize('NFKD', unicode_string).encode('ASCII', 'ignore')
# out: 'Osterreich'

This is great for us, since only about 0.01% of our data contains these Unicode characters and human readability is all that matters.
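
Wrapped up as a small helper, the whole approach looks like this (a minimal sketch; the function name is my own):

import unicodedata

def degrade_to_ascii(unicode_string):
    # Decompose accented characters into base letter + combining mark,
    # then drop anything that still isn't ASCII.
    return unicodedata.normalize('NFKD', unicode_string).encode('ASCII', 'ignore')

degrade_to_ascii(u'Österreich')
# out: 'Osterreich'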

Linode backups failed with no status message

I’ve moved our backups from a cron job to Linode’s full disk backup service.

Their backup service docs explain that backups won’t work if you’ve created any partitions using fdisk, but they don’t mention that backups will also fail if you’ve set up a Finnix recovery ISO on your server. This really should be added to the failure status message; something like “Cannot back up filesystem type X” would be useful.

Remove the Finnix ISO from your configuration profile and it should work!

Gmail jQuery Selectors

I’m just going to start listing selectors I find for various elements in Gmail.

Sure they might change, but not very often.

$("#canvas_frame").contents().find(...)

The subject: .hP
Email addresses in thread: span.gD
Email thread text: .Bu:first
Sidebar: .Bu:last

What are shmmax, shmall, and shmmni? Shared Memory Max

These mysterious settings have been explained pretty well elsewhere; I’m reposting my findings because the last time I dealt with them, I just solved my problem and got the hell out of there.

It’s really more of a pointer to a few other posts.

What are these things?

shmmax appears to be the maximum size, in bytes, of a single shared memory segment that a process can reserve.
http://www.csl.mtu.edu/cs4411/www/NOTES/process/shm/what-is-shm.html

shmall appears to be the maximum total amount of memory available for use as shared memory.
You can check yours with ipcs -lm or cat /proc/sys/kernel/shmall

I’m seeing Oracle documentation suggesting 50-75% of total RAM, but my shmall looked pretty low (2097152) if you assume it’s in bytes like the other shm settings. Update: shmall is measured in pages, so multiply it by your page size; with 4 kB pages that works out to about 8 GB.
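
To check the math on your own box, here’s a quick sketch (Linux only; it just reads /proc and asks the OS for the page size):

import os

# shmall is measured in pages; multiply by the page size to get bytes.
shmall_pages = int(open('/proc/sys/kernel/shmall').read())
page_size = os.sysconf('SC_PAGE_SIZE')  # typically 4096 bytes

total_bytes = shmall_pages * page_size
print('shmall allows %.2f GB of shared memory' % (total_bytes / 1024.0 ** 3))
# With shmall = 2097152 and 4 kB pages: 2097152 * 4096 bytes = 8 GB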

How can I see what segments are currently being used?

Thanks to this post: http://www.unixbod.com/kb/how-to-change-shared-memory-and-semaphore-settings-in-linux/

It seems I can type:

$> ipcs -m
------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status      
0x0052e2c1 98304      postgres   600        108486656  4    

Finally, I can see how much memory my Postgres is claiming based on its configuration!

These are baby steps towards understanding these settings…

I have questions:

1. Is all of shared memory shared across processes, or is a single segment completely reserved for one process? The name implies sharing... so what happens when we reach the maximum and multiple processes have shared_memory settings? I assume it doesn’t actually /consume/ memory, and that each process can use up to the amount available via your shared memory setting.

2. Do we start swapping into virtual memory if shmall is too low, even if we have free RAM?

It sounds like shared memory is a certain subset of memory the system can allocate, capped by the shmall parameter, and that each process can consume up to the amount specified in its config files. So why have a limit like shmmax at all? A safeguard against individual processes with huge default shared memory reservations?

PostgreSQL: could not create shared memory segment: Invalid argument (shmget)

If your new PostgreSQL settings prevent the server from starting with this error, the message helpfully reminds you that it’s probably due to the kernel’s SHMMAX parameter being too low.

* Restarting PostgreSQL 8.4 database server
* The PostgreSQL server failed to start. Please check the log output:
2011-11-04 05:06:26 UTC FATAL: could not create shared memory segment: Invalid argument
2011-11-04 05:06:26 UTC DETAIL: Failed system call was shmget(key=5432001, size=161849344, 03600).
2011-11-04 05:06:26 UTC HINT: This error usually means that PostgreSQL's request for a shared memory segment exceeded your kernel's SHMMAX parameter. You can either reduce the request size or reconfigure the kernel with larger SHMMAX. To reduce the request size (currently 161849344 bytes), reduce PostgreSQL's shared_buffers parameter (currently 19200) and/or its max_connections parameter (currently 53).
If the request size is already small, it's possible that it is less than your kernel's SHMMIN parameter, in which case raising the request size or reconfiguring SHMMIN is called for.
The PostgreSQL documentation contains more information about shared memory configuration.
...fail!

On my Ubuntu system, shmmax was 32 MB or something like that.

Check your system's shmmax

$> sudo cat /proc/sys/kernel/shmmax
33554432
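
To see why the request in the log can’t possibly fit, here’s a rough back-of-the-envelope check (it assumes PostgreSQL’s default 8 kB block size; the rest of the 161849344-byte request is other shared structures):

# Numbers taken from the log output above.
shared_buffers = 19200            # configured in 8 kB blocks
block_size = 8192                 # PostgreSQL's default block size, in bytes

buffers_bytes = shared_buffers * block_size
print(buffers_bytes)              # 157286400, already ~150 MB before any overhead

shmmax = 33554432                 # the 32 MB kernel default shown above
print(buffers_bytes > shmmax)     # True, so the shmget call is bound to fail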

Set a new shmmax value

As usual, I blog about this stuff when it breaks, so when I saw this error I just searched for “yuji shmmax”.

$> sysctl -w kernel.shmmax=BYTES

Make shmmax persist across a reboot

The setting clearly didn’t persist when I rebooted a few minutes ago, which is what caused my Postgres to fail.

$> sudo vim /etc/sysctl.conf
# add kernel.shmmax = bytes
kernel.shmmax = 134217728

Django South data migration: resolving models that have the same name

I have multiple apps with the same model names and am doing a migration between them.

X.models.Order
Y.models.Order

Resolve them via string notation in South.

orm['X.Order']
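
In context, the forwards() step of the data migration might look something like this (a minimal sketch; the Order fields being copied are hypothetical placeholders):

from south.v2 import DataMigration

class Migration(DataMigration):

    def forwards(self, orm):
        # Copy every Y.Order row into X.Order.
        # number and total are hypothetical fields; substitute your own.
        for old in orm['Y.Order'].objects.all():
            orm['X.Order'].objects.create(
                number=old.number,
                total=old.total,
            )

    def backwards(self, orm):
        raise RuntimeError('Cannot reverse this migration.')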

PS: Note that when creating the blank data migration, you will need to pass the `--freeze` option to include the model that isn’t in the app the migration is created from.

For example…

python manage.py datamigration appX migrating_appY_data_to_appX --freeze appY