Shopify JSON API example using Python Requests

I didn’t find any full examples of using the Shopify API in either XML or JSON.

I tried using the Shopify Python library, but had trouble identifying the currently saved one-to-many objects (the Product Variants).

I could easily upload NEW variants, but I could not tell which Python variant objects received which Shopify IDs, and I absolutely needed those IDs since I was using the API for two-way synchronization (pushing and pulling changes).

To make a long story short, the culprit was not having the correct Content-Type header for PUT and POST requests.

Using text/json, Shopify was returning an unhelpful 500 error with the message “Errors: error” – not helpful! I started wondering if I was using the wrong URLs… their documentation template of admin/#{id}.json (Ruby string interpolation) was a bit confusing too. Why not just write admin/{id}.json?

Set up authentication

Using the Python Requests library makes this extremely easy.

import json

import requests
from django.conf import settings  # our API credentials live in django settings

session = requests.Session()
session.auth = (settings.SHOPIFY_API_KEY, settings.SHOPIFY_API_PASSWORD)
print json.loads(session.get('http://myshop.myshopify.com/admin/assets.json').content)

Create a product

payload = '''{
	  "product": {
	    "body_html": "<strong>Good snowboard!</strong>",
	    "product_type": "Snowboard",
	    "title": "Burton Custom Freestlye 151",
	    "variants": [
	      {
	        "price": "10.00",
	        "option1": "First"
	      },
	      {
	        "price": "20.00",
	        "option1": "Second"
	      }
	    ],
	    "vendor": "Burton"
	  }
	}'''

response = session.post(
	'http://myshop.myshopify.com/admin/products.json',
	data=payload,
	headers={
		'Content-Type': 'application/json',  # this is the important part.
	},
)
print response.status_code, response.content
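
The response body is what makes two-way synchronization possible: Shopify echoes back the product it just created, including the IDs it assigned to each variant. A minimal sketch of reading those back, continuing from the code above:

created = json.loads(response.content)['product']
print 'product id:', created['id']
for variant in created['variants']:
    # map these IDs back onto your local objects for later pushes and pulls
    print 'variant id:', variant['id'], variant['option1'], variant['price']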

Modify an existing product

payload = '''{
	  "product": {
	    "published": false,
	    "id": 632910392
	  }
	}'''
response = session.put(
	'http://myshop.myshopify.com/admin/products/632910392.json',
	data=payload,
	headers={'Content-Type': 'application/json'},
)
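
And to pull the change back down (the other half of the synchronization), a GET on the same resource returns Shopify's canonical copy. A small sketch; the field names follow Shopify's product JSON, where published products carry a published_at timestamp:

product = json.loads(
    session.get('http://myshop.myshopify.com/admin/products/632910392.json').content
)['product']
print product['id'], product.get('published_at')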

Shopify Behind an Nginx Reverse Proxy

For SEO purposes, and as part of a general move away from the Shopify platform, we at Grove have finally implemented a reverse proxy via Nginx.

Previously, our DNS records for http://www.grovemade.com pointed directly at grove.myshopify.com, while team.grovemade.com pointed to our linode.com VPS. The problem with this approach is SEO: search engines rank subdomains as separate entities, so team.grovemade.com was competing with http://www.grovemade.com.

Let’s face it – at the end of the day, Shopify is an amazingly useful platform. It does as well as a general solution can. We wanted the best of both worlds: Shopify would serve the e-commerce pages, and Linode would serve our custom django project.

The solution to our problem? Enter the proxy server.

We set the DNS records to point all traffic for grovemade.com at our nginx server on Linode, and had nginx proxy specific URLs to Shopify and everything else to our own servers.

That means when you access http://www.grovemade.com/collections/foobar, nginx proxies the request to grove.myshopify.com and returns the data to your browser seamlessly.

When you access http://www.grovemade.com/foobar/, nginx proxies the request to a local apache server hosting our django project.

Nginx proxy configuration

Here’s the configuration. It was pretty painless once I realized I could proxy to a subdomain already mapped to Shopify.

Note that if you do not proxy_pass to a domain Shopify knows about (via the Shopify admin DNS settings), you must manually set the Host header in nginx: `proxy_set_header Host mystore.myshopify.com;`

# index url
# ---------
location = / {
    proxy_pass http://shopify.grovemade.com;
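    # shopify.grovemade.com is a domain Shopify already knows about, so the
    # default Host header is fine; for an unregistered upstream you would add:
    # proxy_set_header Host mystore.myshopify.com;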

    client_max_body_size    10m;
    client_body_buffer_size     128k;
    proxy_connect_timeout 90;
}


# grove urls
# ----------
location / {
    proxy_pass http://127.0.0.1:8080/;
    proxy_redirect off;

    proxy_set_header   Host             $host;
    proxy_set_header   X-Real-IP        $remote_addr;
    proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;

    client_max_body_size       10m;
    client_body_buffer_size    128k;

    proxy_connect_timeout      90; # time to connect to upstream server
    proxy_send_timeout         120; # time to wait for upstream to accept data
    proxy_read_timeout         120; # time to wait for upstream to return data

    proxy_buffer_size          4k;
    proxy_buffers              4 32k;
    proxy_busy_buffers_size    64k;
    proxy_temp_file_write_size 64k;
}




# shopify urls
# ------------
location ~ ^/(collections|cart|products|shopify|pages|blogs|checkout|admin)/? {
    proxy_pass http://shopify.grovemade.com;

    client_max_body_size    10m;
    client_body_buffer_size     128k;
    proxy_connect_timeout 90;
}

Restart your nginx server and watch your traffic get proxied!

Extra useful stuff you can do when you share the same domain

When your django servers and Shopify share the same domain name, you get more than just SEO. You get access to the Shopify cookies… which means you can programmatically make requests to Shopify to enter checkout, or read the contents of the cart.

I’ve just started to mess around with this, but it appears that Shopify sets a `cart` cookie with an ID string that you can easily read in django via `request.COOKIES.get('cart')`.

Add this cookie to the headers when you make a GET or POST request, and you can manually enter checkout, use the “js” API from python, etc. We’ll be using this so that the only Shopify page our site relies on is checkout.
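
Here is a rough sketch of the idea as a django view, assuming Shopify's standard AJAX /cart.js endpoint; the view name is a placeholder, not part of our actual code:

import json

import requests
from django.http import HttpResponse


def cart_contents(request):
    # Shopify sets this cookie for visitors who have a cart.
    cart_id = request.COOKIES.get('cart')
    if cart_id is None:
        return HttpResponse('{}', content_type='application/json')

    # Forward the visitor's cart cookie so Shopify returns *their* cart.
    shopify_response = requests.get(
        'http://www.grovemade.com/cart.js',
        cookies={'cart': cart_id},
    )
    return HttpResponse(shopify_response.content,
                        content_type='application/json')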

Django Storages, Boto, and Amazon S3 slowness on the manage.py collectstatic command, fixed

I’ve had a few issues moving everything to Amazon S3.

First, django’s collectstatic management command wasn’t even detecting modified files, so the command would re-upload every file to Amazon on each invocation.

Django-Storages v1.1.3 fixed this problem, but then I noticed a new one: modified files were taking less time to detect, but still far too long, given that a single call could return the metadata from Amazon S3.

After some digging, I found the problem in the modified_time method, where the fallback value is computed even if it’s not used. I moved the fallback into an if block so it executes only if get returns None.

entry = self.entries.get(name, self.bucket.get_key(self._encode_name(name))) 
# notice the function being called to populate the default value, regardless
# of whether or not a default is required.

That code should be wrapped in an if statement and only fire the expensive function if the get function fails.

    entry = self.entries.get(name)
    if entry is None:
        entry = self.bucket.get_key(self._encode_name(name))
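
The underlying gotcha is plain Python: function arguments are evaluated eagerly, so the default passed to dict.get is computed whether or not the key is found. A quick demonstration:

def expensive_default():
    print 'expensive call!'  # fires even though the key exists
    return 'fallback'

entries = {'css/style.css': 'cached entry'}
entry = entries.get('css/style.css', expensive_default())
# prints 'expensive call!' before get() even looks up the key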

This change spurred me to create a bitbucket account and learn some basic Mercurial commands : )

Hope it helps somebody else out there with many more than 1k files to sync.

The results

I benchmarked the difference between the two functions on my local machine.

For 1000 files, the new version took less than 0.1s, versus 11.5 seconds for the old function.
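
If you want to see the effect for yourself, here is a rough sketch of the comparison with a stub standing in for the S3 round-trip (the sleep is an arbitrary stand-in for network latency, so the numbers are illustrative):

import time

def get_key_stub(name):
    # stand-in for bucket.get_key(); the real call is a network round-trip
    time.sleep(0.01)
    return 'metadata for %s' % name

entries = dict(('file%d.css' % i, 'cached') for i in range(1000))

start = time.time()
for name in entries:
    entry = entries.get(name, get_key_stub(name))  # old: stub fires every time
print 'old: %.2fs' % (time.time() - start)

start = time.time()
for name in entries:
    entry = entries.get(name)
    if entry is None:
        entry = get_key_stub(name)  # new: stub fires only on a cache miss
print 'new: %.2fs' % (time.time() - start)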

https://bitbucket.org/yuchant/django-storages/overview

jQuery UI Draggable Table Rows Gotchas

I ran into a few gotchas while dragging table rows. One was an easy fix; the other took some guesswork.

Can’t drag individual TR elements

If you set a `TR` element to `draggable()`, it will not work, because the browser doesn’t know how to render a detached, moving `TR` element (`TR`s must be inside a table to render properly).

To fix this, temporarily wrap the `TR` in a table while dragging, via the `helper` option.

            $(".tr_row").draggable({
                helper: function(event) {
                    return $('<div class="drag-row"><table></table></div>')
  .find('table').append($(event.target).closest('tr').clone()).end();
                },
            });

Sortable / Droppable doesn’t work on TR elements, or TR elements disappear during drag

The second problem I ran into was a little more obscure. Your droppable and sortable selectors MUST be applied to a `<tbody>` tag.

I noticed that when I had the plain `<table><tr><td>` structure and dragged, a `<tbody>` element had been created to contain the rows (browsers insert one automatically).

 $("#my_selector tbody").droppable().draggable();

Python Unicode Graceful Degradation to ASCII

Unicode problems have been among the harder issues to deal with: external libraries, and hardware like label printers, sometimes don’t support unicode and throw nasty errors or, worse, fail with mysterious silent bugs.

I’ve continually found better ways to deal with these strings. Here’s my journey:

Quick, dirty, and destructive list comprehension

One solution I used while in the shell was just to make sure `ord(char)` is below 128 for every character.

This method was destructive, but it was acceptable to me given the situation.

unicode_string = u'Österreich'
dirty_fix = ''.join([x for x in unicode_string if ord(x) < 128])

Built in string method encode

Next up I learned about the `encode` method on strings. It encodes a string to a given encoding, but the important part is the second argument, `errors`, which you can pass as 'ignore' or 'replace'.

unicode_string = u'Österreich'
unicode_string.encode('ASCII', 'ignore')
# out: 'sterreich'

unicode_string.encode('ASCII', 'replace')
# out: '?sterreich'

Graceful degradation with python standard library unicodedata

The best solution I’ve found thus far is the standard library module `unicodedata`, which allows latin unicode characters to degrade gracefully into ASCII.

The library contains a function `normalize`, which is described as follows:

Return the normal form `form` for the Unicode string `unistr`. Valid values for `form` are 'NFC', 'NFKC', 'NFD', and 'NFKD'.

…snip…

The normal form KD (NFKD) will apply the compatibility decomposition, i.e. replace all compatibility characters with their equivalents. The normal form KC (NFKC) first applies the compatibility decomposition, followed by the canonical composition.

…snip…

The short version: if you use NFD or NFKD, the function converts each unicode character into its “Normal form D”, known as canonical decomposition.

A character may decompose into a base letter that exists in ASCII, such as `Ö` → `O` plus a combining umlaut; the subsequent ASCII encode with 'ignore' then drops the combining mark.

import unicodedata

unicode_string = u'Österreich'
unicodedata.normalize('NFKD', unicode_string).encode('ASCII', 'ignore')
# out: 'Osterreich'

This is great for us, as only about .01% of our data contains these unicode characters, and human readability is all that matters.
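
Wrapped up as a small reusable helper (the name to_ascii is mine, not from any library):

import unicodedata


def to_ascii(text):
    # Degrade a unicode string to ASCII as gracefully as possible:
    # characters with a decomposable base letter (like umlauts) survive,
    # anything else is dropped.
    return unicodedata.normalize('NFKD', text).encode('ASCII', 'ignore')


print to_ascii(u'Österreich')  # Osterreich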