Linode backups failed with no status message

I’ve moved our backups from a cron job to Linode’s full-disk backup service.

Their backup service docs explain that backups won’t work if you’ve created any partitions using fdisk, but they don’t mention that backups will also fail if you’ve attached a Finnix recovery image to your server. That case really should be called out in the failure status message; something like “Cannot back up filesystem type X” would be useful.

Remove the Finnix ISO from your configuration profile and backups should work!

Gmail jQuery Selectors

I’m just going to start listing selectors I find for various elements in Gmail.

Sure, they might change, but not very often.

$("#canvas_frame").contents().find(...)

The subject: .hP
Email addresses in thread: span.gD
Email thread text: .Bu:first
Sidebar: .Bu:last

What is shmmax, shmall, shmmni? Shared Memory Max

This mysterious setting has been explained pretty well here: and I’m reposting my findings, since the last time I dealt with these settings I just solved my problem and got the hell out of there.

It’s really more a pointer to a few other posts.

What are these things?

shmmax appears to set the maximum size, in bytes, of a single shared memory segment a process can reserve.
http://www.csl.mtu.edu/cs4411/www/NOTES/process/shm/what-is-shm.html

shmall appears to be the total amount of memory the system can use as shared memory.
You can check yours with ipcs -lm or cat /proc/sys/kernel/shmall

I’m seeing Oracle documentation suggesting 50-75% of total RAM, but my shmall is set pretty low if you assume it’s in bytes like the other shm settings (2097152). Update: this value is in pages, so multiply it by your page size, which works out to about 8 GB here.
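A quick sanity check of that update. This assumes the common 4 KiB page size; the real value can differ, and `os.sysconf("SC_PAGE_SIZE")` will tell you yours:

```python
# shmall is measured in pages, not bytes.
shmall_pages = 2097152  # the value seen in /proc/sys/kernel/shmall above

# Assumption: a typical 4 KiB page; check os.sysconf("SC_PAGE_SIZE") for yours.
page_size = 4096

shmall_bytes = shmall_pages * page_size
print(shmall_bytes)           # 8589934592
print(shmall_bytes // 2**30)  # 8 (GiB)
```

So the “pretty low” 2097152 is actually about 8 GiB of shared memory.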

How can I see what segments are currently being used?

Thanks to this post: http://www.unixbod.com/kb/how-to-change-shared-memory-and-semaphore-settings-in-linux/

It seems I can type:

$> ipcs -m
------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status      
0x0052e2c1 98304      postgres   600        108486656  4    

Finally, I can see how much my postgres is claiming based on my configuration!
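If you want to pull those numbers out programmatically, here’s a minimal parsing sketch; it assumes ipcs -m output in exactly the column layout shown above:

```python
# Parse the tabular output of `ipcs -m` into dicts (sample output from above).
sample = """\
------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x0052e2c1 98304      postgres   600        108486656  4
"""

segments = []
for line in sample.splitlines():
    fields = line.split()
    # Data rows start with a hex key like 0x0052e2c1; skip headers.
    if fields and fields[0].startswith("0x"):
        segments.append({
            "key": fields[0],
            "shmid": int(fields[1]),
            "owner": fields[2],
            "bytes": int(fields[4]),
        })

print(segments[0]["owner"], segments[0]["bytes"])  # postgres 108486656
```

In a real script you’d feed it the output of subprocess.run(["ipcs", "-m"], ...) instead of a string literal.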

These are baby steps towards understanding these settings…

I have questions:

1. Is all of shared memory shared across processes, or is a single segment block completely reserved for one process? The name implies sharing… so what happens as we reach the maximum and multiple processes have shared-memory settings? I assume it doesn’t actually /consume/ memory, and each process can use up to the amount available via your shared memory setting.

2. Do we start swapping into virtual memory if shmall is too low, even if we have free RAM?

It sounds like shared memory is a subset of memory the system can allocate, capped by the shmall parameter, and each process can consume up to the amount specified in its config files. So why have a limit like shmmax at all? A safeguard against individual processes with huge default shared memory reservations?

PostgreSQL — could not create shared memory segment: Invalid argument (shmget)

If your new PostgreSQL settings are preventing the postgres server from starting with this error, the message helpfully reminds you that it’s probably due to the kernel’s SHMMAX parameter being too low.

* Restarting PostgreSQL 8.4 database server
* The PostgreSQL server failed to start. Please check the log output:
2011-11-04 05:06:26 UTC FATAL: could not create shared memory segment: Invalid argument
2011-11-04 05:06:26 UTC DETAIL: Failed system call was shmget(key=5432001, size=161849344, 03600).
2011-11-04 05:06:26 UTC HINT: This error usually means that PostgreSQL's request for a shared memory segment exceeded your kernel's SHMMAX parameter. You can either reduce the request size or reconfigure the kernel with larger SHMMAX. To reduce the request size (currently 161849344 bytes), reduce PostgreSQL's shared_buffers parameter (currently 19200) and/or its max_connections parameter (currently 53).
If the request size is already small, it's possible that it is less than your kernel's SHMMIN parameter, in which case raising the request size or reconfiguring SHMMIN is called for.
The PostgreSQL documentation contains more information about shared memory configuration.
…fail!

On my Ubuntu system, the shmmax was 32 MB.
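The numbers in that HINT actually line up: shared_buffers is counted in 8 kB buffers, so 19200 buffers is about 150 MiB, and the request adds a few MB of bookkeeping on top (the exact overhead varies with settings like max_connections; the arithmetic below is just a sanity check, not Postgres’s internal formula):

```python
# Rough sanity check of the shmget request size from the error above.
shared_buffers = 19200   # from the HINT, counted in 8 kB buffers
buffer_size = 8192       # PostgreSQL's default block size in bytes

buffers_bytes = shared_buffers * buffer_size
print(buffers_bytes)            # 157286400

request = 161849344             # size from the failed shmget call
print(request - buffers_bytes)  # 4562944 bytes of overhead
```

Either way, both numbers dwarf a 32 MB SHMMAX, which is why shmget fails.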

Check your system’s shmmax

$> sudo cat /proc/sys/kernel/shmmax
33554432

Set a new shmmax value

As usual, I blog about this stuff when it breaks, so when I saw this, I just searched yuji shmmax.

$> sudo sysctl -w kernel.shmmax=BYTES

Make shmmax persist across a reboot

The setting clearly didn’t persist when I last rebooted a few minutes ago and caused my postgres to fail.

$> sudo vim /etc/sysctl.conf
# add a kernel.shmmax line, in bytes
kernel.shmmax = 134217728
$> sudo sysctl -p  # apply without rebooting
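If you’d rather not hand-count the bytes, a tiny helper does it (134217728 above is exactly 128 MiB; the helper name is mine, not a sysctl convention):

```python
def mib_to_bytes(mib):
    """Convert mebibytes to the byte count sysctl's kernel.shmmax expects."""
    return mib * 1024 * 1024

print(mib_to_bytes(128))  # 134217728, the value used in sysctl.conf above
print(mib_to_bytes(32))   # 33554432, the old default seen earlier
```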

Django South data migration – resolving models that have the same name

I have multiple apps with the same model names and am doing a migration between them.

X.models.Order
Y.models.Order

Resolve them via string notation in South.

orm['X.Order']

PS: Note that when creating the blank data migration, you will need to pass the `--freeze` option to include the model that isn’t in the app the migration is created for.

For example…

python manage.py datamigration appX migrating_appY_data_to_appX --freeze appY

Python — Pickle error: must be convertible to a buffer, not X

This is just the error thrown when you pass pickle.loads() something that isn’t a string or buffer.

I think this fact should be way closer to the top of Google results for this error.

I had just mixed up some variable names and didn’t realize my object might not be the base64 string it was supposed to be : )
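A minimal reproduction. The wording here is Python 3’s (“a bytes-like object is required”) rather than Python 2’s buffer message, but the cause is the same: loads() got something that isn’t a pickle byte string.

```python
import base64
import pickle

# What the variable was supposed to hold: a base64-encoded pickle.
payload = base64.b64encode(pickle.dumps({"ok": True}))

# Passing the wrong variable (here, a plain dict) raises TypeError.
try:
    pickle.loads({"oops": "not bytes"})
except TypeError as e:
    print("TypeError:", e)

# The fix: decode the base64 string first, then unpickle.
obj = pickle.loads(base64.b64decode(payload))
print(obj)  # {'ok': True}
```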

OS X Lion – My networking problems turn out to be due to the 2.4 GHz band

Our wifi broadcasts on two bands: 5 GHz and 2.4 GHz.

If I’m on the 2.4 GHz network, I get 10-20% packet loss and huge 500 ms to 2000 ms latency.

I switched to the 5 GHz network and I’m not having these issues. I’m not looking into this any further, but if you happen to be on a dual-band router experiencing sudden network sluggishness on OS X Lion, try the other wifi band.

Find where apt-get installed packages

Searching for this is somehow not very productive; it borders on impossible.

If you’re wondering where exactly an apt-get package put its files on your system, use dpkg -L to list all files associated with a package.

I’ve done some ugly greps in the past across a huge portion of the filesystem – not a pretty sight.

First try the whereis, locate, and which commands if you know an executable from your package.

If you’re just looking for the source, though, it’s often difficult.

Use dpkg to find apt-get installed source files

Use dpkg -l to list all installed packages, and grep it to find your package name.

Use dpkg -L my_package_name to list all of its installed files. A quick glance and you’ll see where your lost code is : )

$> dpkg -l | grep my_package # find the package name
$> dpkg -L my_package_name