For many APIs (most I've seen: Google, Twitter, NOAA, Yahoo, Facebook, etc.) rate limiting is a function of your API key or OAuth credentials. The good news is you won't need to spoof your IP; you just need to swap out credentials as they hit their rate limit.
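For example, a bare-bones version of that credential swap with no extra tooling might look like the sketch below. The key values, URL, and the 429 status check are placeholders/assumptions, not anything from a specific API:
from itertools import cycle
import requests

# Placeholder keys for separate accounts; rotate to the next one whenever
# the current key gets rate limited.
API_KEYS = cycle(["key_for_account_1", "key_for_account_2", "key_for_account_3"])
current_key = next(API_KEYS)

def get_with_rotation(url, params):
    global current_key
    for _ in range(3):                       # try each key at most once
        r = requests.get(url, params=params, headers={'token': current_key})
        if r.status_code != 429:             # 429 = Too Many Requests (rate limited)
            return r
        current_key = next(API_KEYS)         # this key hit its limit; swap credentials
    r.raise_for_status()                     # every key was rate limited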
A bit of shameless self-promotion here, but I wrote a Python package specifically for handling this problem:
https://github.com/rawkintrevo/angemilner
https://pypi.python.org/pypi/angemilner/0.2.0
It requires a MongoDB daemon, and basically you make a page for each of your keys. So say you have 4 email addresses, each with a separate key assigned. When you load a key in, you specify the maximum calls per day and the minimum time between uses.
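Roughly, each key ends up stored as its own record in Mongo, something like the hypothetical sketch below; the database, collection, and field names are my guesses for illustration, not necessarily the library's actual schema. In practice you just go through the library's API, as shown next.
from pymongo import MongoClient

# Hypothetical per-key record; names here are illustrative only.
client = MongoClient()                       # assumes mongod is running locally
client['angemilner']['keys'].insert_one({
    'key': 'your_assigned_key1',
    'service': 'noaa',
    'max_calls_per_day': 1000,
    'min_time_between_uses': 0.2,
    'uses_today': 0,
})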
Load keys:
from angemilner import APIKeyLibrarian

l = APIKeyLibrarian()
# args: key, service name, max calls per day, min time between uses
l.new_api_key("your_assigned_key1", 'noaa', 1000, .2)
l.new_api_key("your_assigned_key2", 'noaa', 1000, .2)
Then when you run your scraper, for instance against the NOAA API, this:
import requests

url = 'http://www.ncdc.noaa.gov/cdo-web/api/v2/stations'
payload = {'limit': 1000,
           'datasetid': 'GHCND',
           'startdate': '1999-01-01'}
r = requests.get(url, params=payload, headers={'token': 'your_assigned_key'})
becomes:
url = 'http://www.ncdc.noaa.gov/cdo-web/api/v2/stations'
payload = {'limit': 1000,
           'datasetid': 'GHCND',
           'startdate': '1999-01-01'}
r = requests.get(url, params=payload, headers={'token': l.check_out_api_key('noaa')['key']})
So if you have 5 keys, l.check_out_api_key returns the key that has the fewest uses and waits until enough time has elapsed for it to be used again.
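For intuition, here's a rough sketch of that check-out logic (not the library's actual implementation; the in-memory records and field names are made up):
import time

# Hypothetical key records mirroring the load step above.
keys = [
    {'key': 'your_assigned_key1', 'uses': 12, 'last_used': 0.0,
     'max_per_day': 1000, 'min_interval': 0.2},
    {'key': 'your_assigned_key2', 'uses': 7, 'last_used': 0.0,
     'max_per_day': 1000, 'min_interval': 0.2},
]

def check_out_key(keys):
    # Pick the key with the fewest uses that still has daily quota left.
    available = [k for k in keys if k['uses'] < k['max_per_day']]
    k = min(available, key=lambda rec: rec['uses'])
    # Wait until the minimum spacing between uses of that key has elapsed.
    wait = k['min_interval'] - (time.time() - k['last_used'])
    if wait > 0:
        time.sleep(wait)
    k['uses'] += 1
    k['last_used'] = time.time()
    return k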
Finally, to see how often your keys have been used and the remaining usage available:
from pprint import pprint

pprint(l.summary())
I didn't write this for R because most scraping is done in Python (most of MY scraping, anyway). It could easily be ported.
That's how you can technically get around rate limiting. Ethically ...
UPDATE: The example here uses the Google Places API.