django formfield_for_dbfield: __init__ unexpected keyword argument 'request'

Quick post:

Many formfield_for_dbfield examples floating around out there do not work because they call `db_field.formfield(**kwargs)` directly.

Either remove that call and let the super call take care of it (as below), or remove `request` from the kwargs with `kwargs.pop('request')`.

It turns out the super call now removes the request kwarg before processing continues, so if you call db_field.formfield(**kwargs) manually, the extra keyword is still there and triggers the error above.

def formfield_for_dbfield(self, db_field, **kwargs):
    if db_field.attname == 'long_description':
        kwargs['widget'] = CLEditorWidget()
    return super(ProductAdmin, self).formfield_for_dbfield(db_field, **kwargs)
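If you would rather keep the direct `db_field.formfield(**kwargs)` call, the kwargs.pop variant looks like this. A runnable sketch: ModelAdmin is replaced by a plain object so it stands alone, and CLEditorWidget is stubbed as a string.

```python
# Sketch of the kwargs.pop variant (stand-ins for ModelAdmin/CLEditorWidget).
class ProductAdmin(object):  # really django.contrib.admin.ModelAdmin
    def formfield_for_dbfield(self, db_field, **kwargs):
        kwargs.pop('request', None)  # formfield() doesn't accept 'request'
        if db_field.attname == 'long_description':
            kwargs['widget'] = 'CLEditorWidget'
        return db_field.formfield(**kwargs)
```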

I believe this hook isn't really meant to be used, since it's not in the docs.

I use it because formfield_overrides is global, keyed by field class rather than by individual field, and formfield_for_foreignkey/formfield_for_manytomany only work for relationships.

If I'm missing a better hook, just let me know. I haven't really dug around since this works perfectly well.

Sorl Thumbnail Convert PNG to JPEG with Background Color

The benefits of PNG are obvious: it's lossless and it supports transparency (alpha).

Unfortunately, that means the files are humongous. Because PNG is lossless, complex images stay large: a product shot might be 1000KB as a PNG and 50KB as a JPEG that doesn't look all that much worse.

If you convert a PNG to a JPEG with the sorl-thumbnail library, it uses a white background by default, and for some reason it also produces artifacts on partial alpha such as shadows.

Ideally, we store one super-high-quality PNG that can be converted and optimized to a JPEG on the fly, so I created a PIL sorl-thumbnail engine that accepts a new template tag argument, "background": an RGB color applied as the background of an RGBA source image.

Apparently my solution is the simplest and fastest of the RGBA-to-JPEG methods described in this Stack Overflow question.

The Code: Converting Transparent PNGs to JPEGs with Background Colors

"""
Sorl Thumbnail Engine that accepts background color
---------------------------------------------------

Created on Sunday, February 2012 by Yuji Tomita
"""
from PIL import Image, ImageColor
from sorl.thumbnail.engines.pil_engine import Engine as PILEngine


class Engine(PILEngine):
    def create(self, image, geometry, options):
        thumb = super(Engine, self).create(image, geometry, options)
        if options.get('background'):
            try:
                background = Image.new(
                    'RGB',
                    thumb.size,
                    ImageColor.getcolor(options.get('background'), 'RGB'),
                )
                # Band 3 is the alpha channel of an RGBA image.
                background.paste(thumb, mask=thumb.split()[3])
                return background
            except Exception:
                return thumb
        return thumb
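To see what the paste call does in isolation, here's a tiny standalone sketch (requires PIL/Pillow): a half-transparent red square flattened onto a dark grey background.

```python
from PIL import Image, ImageColor

# A 2x2 half-transparent red "thumbnail"...
thumb = Image.new('RGBA', (2, 2), (255, 0, 0, 128))
# ...flattened onto a solid #333333 background using its alpha band.
background = Image.new('RGB', thumb.size, ImageColor.getcolor('#333333', 'RGB'))
background.paste(thumb, mask=thumb.split()[3])  # band 3 is alpha
print(background.mode, background.getpixel((0, 0)))
```

The result is a plain RGB image (JPEG-safe), with the red blended over the grey in proportion to the alpha.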

Now, just modify your thumbnail engine setting:

THUMBNAIL_ENGINE = 'path.to.Engine'

Usage

{% thumbnail my_file "100x100" format="JPEG" background="#333333" as thumb %}
   <img src="{{ thumb.url }}" />
{% endthumbnail %}

Done!

Conclusion

Now, we can store/upload one copy of a perfect PNG and have the website automatically generate an optimized JPEG regardless of the situation. Seriously amazing.

Need a smaller PNG? done.
Need a smaller JPEG with background color black? done.

https://gist.github.com/1920535

S3 and Django Staticfiles collectstatic command upload only changed files

I just revisited a problem getting S3 and collectstatic to play nicely via django-storages.

I spent an hour wondering what had changed in django or django-storages that started to force the collectstatic command to always upload all images.

I did a line by line code comparison between my two django and storages installations and couldn’t find anything odd.

I started to google more.

Hello, me!

I ended up on this site: http://c4urself.posterous.com/djangos-collectstatic-with-s3boto which actually linked to a few of my original posts.

For the record this happens to me on a daily basis (my memory sucks) but this was rather unexpected in the vastness of the world wide internets.

You MUST install python-dateutil==1.5

Collectstatic silently fails while trying to read the modified time of the S3 files, and consequently re-uploads every file. Since there was no error, I had no idea anything was failing until I read the above post, which pointed to the specific function (originally mentioned by me, apparently).

If your collectstatic command is always uploading all files, make sure you have python-dateutil==1.5 installed!
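As I understand it, django-storages hands S3's Last-Modified string to dateutil to turn it into a datetime; without a working dateutil, that parse fails silently and every file looks new. A quick sanity check of the parse itself (the timestamp below is just an example in S3's RFC 1123 format):

```python
from dateutil import parser

# S3 reports Last-Modified as an RFC 1123 string; dateutil parses it directly.
modified = parser.parse('Sat, 11 Feb 2012 03:46:40 GMT')
print(modified.isoformat())
```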

Shopify JSON API example using Python Requests


I didn’t find any full examples of using the Shopify API in either XML or JSON.

I tried using the Shopify Python library but had trouble identifying the currently saved one-to-many objects (the product variants).

I could easily upload NEW variants, but I couldn't tell which Python variant objects received which Shopify IDs, which I absolutely needed since I was using the API for two-way synchronization (pushing and pulling changes).

To make a long story short, the culprit was not having the correct Content-Type header on PUT and POST requests.

With text/json, Shopify returned an unhelpful 500 with the error message "Errors: error", which told me nothing! I started wondering whether I was using the wrong URLs… their template suggesting admin/#{id}.json was a bit confusing too. Why not just write admin/{id}.json?

Set up authentication

Using the python requests library makes this extremely easy.

request = requests.Session()
request.auth = (settings.SHOPIFY_API_KEY, settings.SHOPIFY_API_PASSWORD)
print(json.loads(request.get('http://myshop.myshopify.com/admin/assets.json').content))

Create a product

payload = '''{
    "product": {
        "body_html": "<strong>Good snowboard!</strong>",
        "product_type": "Snowboard",
        "title": "Burton Custom Freestyle 151",
        "variants": [
            {
                "price": "10.00",
                "option1": "First"
            },
            {
                "price": "20.00",
                "option1": "Second"
            }
        ],
        "vendor": "Burton"
    }
}'''

response = request.post(
    'http://myshop.myshopify.com/admin/products.json',
    data=payload,
    headers={
        'Content-Type': 'application/json',  # this is the important part
    },
)
print(response.status_code, response.content)

Modify an existing product

payload = '''{
    "product": {
        "published": false,
        "id": 632910392
    }
}'''
response = request.put(
    'http://myshop.myshopify.com/admin/products/632910392.json',
    data=payload,
    headers={'Content-Type': 'application/json'},
)

Shopify Behind an Nginx Reverse Proxy

For SEO Purposes and a general move away from the Shopify platform, we at Grove have finally implemented a reverse proxy via Nginx.

Previously, our DNS records for http://www.grovemade.com pointed directly at grove.myshopify.com, while team.grovemade.com pointed to our linode.com VPS. The problem with this approach is SEO: search engines rank subdomains as separate entities, so team.grovemade.com competes with http://www.grovemade.com.

Let's face it: at the end of the day, Shopify is an amazingly useful platform. It does as well as a general solution can. We used the best of both worlds: Shopify would serve the e-commerce pages, and Linode would serve our custom django project.

The solution to our problem? Enter the proxy server.

We pointed the DNS records for grovemade.com at our nginx server at Linode, and had nginx proxy specific URLs to Shopify and everything else to our own servers.

That means when you access http://www.grovemade.com/collections/foobar, nginx proxies the request to grove.myshopify.com and returns the data to your browser seamlessly.

When you access http://www.grovemade.com/foobar/, nginx proxies the request to a local apache server hosting our django project.

Nginx proxy configuration

Here’s the configuration. It was pretty painless once I realized I could proxy to a subdomain already mapped to Shopify.

Note that if you do not proxy_pass to a domain Shopify knows about (via the Shopify admin DNS settings), you must set the Host header manually in nginx: `proxy_set_header Host mystore.myshopify.com`.
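For example, if you proxy straight to the bare myshopify.com address instead of a custom domain, the block might look like this (mystore.myshopify.com is a placeholder):

```nginx
location / {
    proxy_pass http://mystore.myshopify.com;
    # Shopify routes requests by Host, so send a name it recognizes:
    proxy_set_header Host mystore.myshopify.com;
}
```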

# index url
# ---------
location = / {
    proxy_pass http://shopify.grovemade.com;

    client_max_body_size    10m;
    client_body_buffer_size     128k;
    proxy_connect_timeout 90;
}


# grove urls
# ----------
location / {
    proxy_pass http://127.0.0.1:8080/;
    proxy_redirect off;

    proxy_set_header   Host             $host;
    proxy_set_header   X-Real-IP        $remote_addr;
    proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;

    client_max_body_size       10m;
    client_body_buffer_size    128k;

    proxy_connect_timeout      90; # time to connect to upstream server
    proxy_send_timeout         120; # time to wait for upstream to accept data
    proxy_read_timeout         120; # time to wait for upstream to return data

    proxy_buffer_size          4k;
    proxy_buffers              4 32k;
    proxy_busy_buffers_size    64k;
    proxy_temp_file_write_size 64k;
}

# shopify urls
# ------------
location ~ ^/(collections|cart|products|shopify|pages|blogs|checkout|admin)/? {
    proxy_pass http://shopify.grovemade.com;

    client_max_body_size    10m;
    client_body_buffer_size     128k;
    proxy_connect_timeout 90;
}

Restart your nginx server and watch your traffic get proxied!

Extra useful stuff you can do when you share the same domain

When your django servers and Shopify share the same domain name, you get more than just SEO. You get access to the Shopify cookies, which means you can programmatically make requests to Shopify to enter checkout or read the contents of the cart.

I've just started to mess around with this, but it appears that Shopify sets a `cart` cookie with an ID string that you can easily read in django via `request.COOKIES.get('cart')`.

Add this cookie to the headers when you make a GET or POST request, and you can manually enter checkout, drive the "js" API from Python, and so on. We'll be using this so that our site relies on Shopify for literally nothing but its checkout page.
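A minimal sketch of the cookie forwarding, assuming the `cart` cookie described above and a JSON cart endpoint at /cart.js (the Django view and URL in the comments are hypothetical):

```python
# Build a Cookie header that reuses the visitor's Shopify cart token.
def shopify_cookie_header(cart_token):
    return {'Cookie': 'cart=%s' % cart_token}

# In a Django view (hypothetical), forward it server-side:
#   token = request.COOKIES.get('cart')
#   resp = requests.get('http://www.grovemade.com/cart.js',
#                       headers=shopify_cookie_header(token))
#   cart = resp.json()
print(shopify_cookie_header('abc123'))
```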

Django Storages, Boto, and Amazon S3 Slowness on manage.py collectstatic command fixed

I’ve had a few issues moving everything to Amazon S3.

First, django’s collectstatic management command wasn’t even detecting modified files, so the command would re-upload all files to amazon every invocation.

Django-Storages v1.1.3 fixed this problem, but then I noticed a new one: modified files were taking less time to detect, but still far too long, given that a single call was already returning the metadata from Amazon S3.

After some digging, I found the problem in the modified_time method, where the fallback value is computed even when it isn't used. I moved the fallback into an if block so it executes only when get returns None.

entry = self.entries.get(name, self.bucket.get_key(self._encode_name(name)))
# Notice the fallback function is called to populate the default value,
# regardless of whether a default is actually needed.

That code should be wrapped in an if statement and only fire the expensive function if the get function fails.

    entry = self.entries.get(name)
    if entry is None:
        entry = self.bucket.get_key(self._encode_name(name))
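The underlying gotcha: Python evaluates `dict.get`'s default argument eagerly, so the S3 round trip ran on every lookup even when the entry was cached. A standalone sketch of the difference:

```python
calls = []

def expensive_fallback():
    """Stand-in for self.bucket.get_key() -- an S3 round trip."""
    calls.append(1)
    return 'from-s3'

entries = {'css/style.css': 'cached'}

# Eager: the fallback runs even though the key is present.
entry = entries.get('css/style.css', expensive_fallback())
assert calls == [1]

# Lazy: the fallback runs only on a miss.
del calls[:]
entry = entries.get('css/style.css')
if entry is None:
    entry = expensive_fallback()
assert calls == []
print(entry)
```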

This change spurred me to create a bitbucket account and learn some basic Mercurial commands : )

Hope it helps somebody else out there with many more than 1k files to sync.

The results

I benchmarked the difference on my local machine between the two functions.

For 1000 files, the new version took under 0.1s; the old function took 11.5 seconds.

https://bitbucket.org/yuchant/django-storages/overview