If you live in the Upper Midwest, you know what I mean: There’s no place like Culver’s. And Kwik Trip. And Menards. (All Wisconsin-based, for that lovely hometown charm.) But the worst part of visiting all three is the time spent between stepping out of one and arriving at the next. Where is the best location to minimize this pain?
First, obligatory memes:
I grew up near Hutchinson, MN. Hutchinson gained a Menards in 2001¹, a Culver’s in 2005², and a Kwik Trip in 2013³. They’re all within a few blocks of each other—a good start for minimizing the total trip time of a Culver’s–Kwik Trip–Menards lap.
But is there a location with an even closer clustering? Here we brute-force every Culver’s/Kwik Trip/Menards combination, checking the sum of their pairwise distances and keeping the top ten results. (The sum of pairwise distances is a rough estimate of how far apart the stores are; one could instead find the “center of mass”—the point whose summed distance to the three stores is minimal—and score by that sum, or use any number of other scoring methods.)
import json
import math

# Should take ~20 seconds to run

# http://edwilliams.org/avform147.htm#Dist
def dist(store1, store2):
    lat1 = store1['lat'] * math.pi/180
    lon1 = store1['long'] * math.pi/180
    lat2 = store2['lat'] * math.pi/180
    lon2 = store2['long'] * math.pi/180
    distance_in_radians = 2*math.asin(math.sqrt((math.sin((lat1-lat2)/2))**2 +
        math.cos(lat1)*math.cos(lat2)*(math.sin((lon1-lon2)/2))**2))
    earth_radius = 3959  # freedom units (miles)
    return distance_in_radians * earth_radius

def total_dist(store1, store2, store3):
    return dist(store1, store2) + dist(store2, store3) + dist(store3, store1)

# Format an address
def a(x):
    return f"[{x['address']}, {x['city']}, {x['state']} {x['zip']}]({x['website']})"

if __name__ == '__main__':
    # A naïve approach might be to check for each city how close the Culver's,
    # Kwik Trip, and Menards are (if a city has all three). However, this would
    # break if there happen to be close clusterings across city lines. Instead,
    # we make a list of the locations of each chain, and process that. (While
    # that does make a difference for some results, none of them make it into
    # the top 10.)
    with open("stores.json", 'r') as f:
        stores = json.loads(f.read())
    culverses = list(filter(lambda s: s['chain'] == "Culver's", stores))
    kwiktrips = list(filter(lambda s: s['chain'] == "Kwik Trip", stores))
    menardses = list(filter(lambda s: s['chain'] == "Menards", stores))

    top_matches = []
    for culvers in culverses:
        for kwiktrip in kwiktrips:
            # This is a hack to speed the program up; without this the program
            # takes hours to run. With this optimization, it takes <1 minute.
            if abs(kwiktrip['lat'] - culvers['lat']) + abs(kwiktrip['long'] - culvers['long']) > 2:  # More than ~50 miles apart
                continue
            for menards in menardses:
                if abs(kwiktrip['lat'] - menards['lat']) + abs(kwiktrip['long'] - menards['long']) > 2:
                    continue
                top_matches.append({
                    "culvers": culvers,
                    "kwiktrip": kwiktrip,
                    "menards": menards,
                    "dist": total_dist(culvers, kwiktrip, menards),
                })
    top_matches.sort(key=lambda s: s['dist'])
    top_matches = top_matches[:10]

    print("| Rank | Culver's | Kwik Trip | Menards | Total distance |")
    print("|-|-|-|-|-|")
    for rank, top_match in enumerate(top_matches, start=1):
        print(f"| {rank} | {a(top_match['culvers'])} | {a(top_match['kwiktrip'])} | {a(top_match['menards'])} | {top_match['dist']:.2f} miles |")
Waite Park, MN comes out on top:
with Muscatine, IA a close second:
And Hutchinson does make the list! The 0.74 miles for a Culver’s–Menards–Kwik Trip lap puts it in 6th place.
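As an aside, the “center of mass” scoring mentioned earlier can be sketched with a few iterations of Weiszfeld’s algorithm. This is a hypothetical alternative scorer, not part of the scripts above, and for simplicity it treats coordinates as planar rather than using great-circle distances—a fair approximation over distances this small:

```python
import math

# Hypothetical alternative scorer: approximate the geometric median of the
# trio (the point minimizing the summed distance to all three stores) with
# Weiszfeld's algorithm, then score the trio by that minimal sum.
# Coordinates are treated as planar, unlike the haversine-based dist() above.
def geometric_median_score(points, iterations=100):
    # Start from the centroid
    x = sum(p[0] for p in points) / len(points)
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iterations):
        weights = []
        for px, py in points:
            d = math.hypot(x - px, y - py)
            if d == 0:  # the median landed exactly on a store
                return sum(math.hypot(x - qx, y - qy) for qx, qy in points)
            weights.append(1 / d)
        total_w = sum(weights)
        # Each point pulls the estimate toward it, weighted by 1/distance
        x = sum(w * px for w, (px, py) in zip(weights, points)) / total_w
        y = sum(w * py for w, (px, py) in zip(weights, points)) / total_w
    return sum(math.hypot(x - px, y - py) for px, py in points)

# Example: three stores at the corners of a 3-4-5 right triangle
print(geometric_median_score([(0, 0), (0, 3), (4, 0)]))
```

Either score ranks tight clusters ahead of spread-out ones; the sum of pairwise distances is just cheaper to compute.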
While we have all this fun data, let’s check out a few more things! I’ve noticed that Menardses and Culver’ses seem to have a habit of being located right next to each other (though I have yet to see a Culver’s in a Menards, the way you might see a Subway in a Walmart, or something). This was true in Hutchinson, Eden Prairie⁴, Burnsville/Savage, Richfield/Bloomington—every Menards I’ve ever been to, I believe, except Apple Valley.
It’s very possible that this is just confirmation bias, though – it’s not true in Apple Valley, at least. When else isn’t this the case? That is, what cities have the biggest distances between their Culver’s and Menards?
import json

from find_closest_trio import dist, a

if __name__ == '__main__':
    most_widely_spaced_pairs = []
    with open("stores.json", 'r') as f:
        stores = json.loads(f.read())
    culverses = list(filter(lambda s: s['chain'] == "Culver's", stores))
    menardses = list(filter(lambda s: s['chain'] == "Menards", stores))
    for culvers in culverses:
        for menards in menardses:
            if culvers['city'] == menards['city'] and culvers['state'] == menards['state']:
                most_widely_spaced_pairs.append((culvers, menards))
    most_widely_spaced_pairs.sort(key=lambda x: dist(x[0], x[1]), reverse=True)
    most_widely_spaced_pairs = most_widely_spaced_pairs[:10]

    print("| Rank | Culver's | Menards | Distance |")
    print("|-|-|-|-|")
    for rank, pair in enumerate(most_widely_spaced_pairs, start=1):
        print(f"| {rank} | {a(pair[0])} | {a(pair[1])} | {dist(pair[0], pair[1]):.2f} miles |")
| Rank | Culver’s | Menards | Distance |
|---|---|---|---|
| 1 | 4701 Kentucky Ave, Indianapolis 46221 | 7145 E 96Th St, Indianapolis 46250 | 19.41 miles |
| 2 | 7953 State Line Rd, Kansas City 64114 | 3701 Nw 90Th St, Kansas City 64154 | 18.84 miles |
| 3 | 7105 E 96th St, Indianapolis 46250 | 7140 S Emerson Ave, Indianapolis 46237 | 18.28 miles |
| 4 | 8232 Country Village Dr, Indianapolis 46214 | 7145 E 96Th St, Indianapolis 46250 | 17.85 miles |
| 5 | 5020 W 71st St, Indianapolis 46268 | 7140 S Emerson Ave, Indianapolis 46237 | 17.36 miles |
| 6 | 1444 Rentra Dr, Columbus 43228 | 6800 E Broad St, Columbus 43213 | 17.14 miles |
| 7 | 11050 S. Doty Avenue W, Chicago 60628 | 2601 N Clybourn Ave, Chicago 60614 | 16.82 miles |
| 8 | 11050 S. Doty Avenue W, Chicago 60628 | 4501 W North Ave, Chicago 60639 | 16.38 miles |
| 9 | 575 W Layton Ave, Milwaukee 53207 | 8110 W Brown Deer Rd, Milwaukee 53223 | 16.07 miles |
| 10 | 5115 Shear Avenue, Indianapolis 46203 | 7145 E 96Th St, Indianapolis 46250 | 15.77 miles |
Unsurprisingly, it’s just a bunch of really big cities with multiple stores spread across their respective metro areas. (And cities with especially widely spread locations are heavily overrepresented in the list, too!) Here’s Indianapolis:
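If we wanted to curb that overrepresentation, one option would be to keep only each city’s single widest pair before printing. This is a hypothetical post-processing tweak, not part of the script above; it assumes the pair list is already sorted widest-first:

```python
# Hypothetical post-processing step: collapse a widest-first-sorted pair list
# so each (city, state) keeps only its single widest Culver's-Menards pair.
def widest_pair_per_city(pairs_sorted_desc):
    seen = set()
    result = []
    for culvers, menards in pairs_sorted_desc:
        key = (culvers['city'], culvers['state'])
        if key not in seen:  # first hit for a city is its widest pair
            seen.add(key)
            result.append((culvers, menards))
    return result

# Tiny demo with made-up records (only the fields the function reads)
pairs = [
    ({'city': 'Indianapolis', 'state': 'IN'}, {'city': 'Indianapolis', 'state': 'IN'}),
    ({'city': 'Kansas City', 'state': 'MO'}, {'city': 'Kansas City', 'state': 'MO'}),
    ({'city': 'Indianapolis', 'state': 'IN'}, {'city': 'Indianapolis', 'state': 'IN'}),
]
print(len(widest_pair_per_city(pairs)))  # two distinct cities remain
```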
Finally, since I have all this neat data, here’s a map of every Culver’s, Kwik Trip, and Menards.
Here's how I retrieved the data I used:
#!/usr/bin/python3
import csv
import json
import requests

stores = []

# Culver's
print("Searching for Culver's locations")
response = requests.get('https://hosted.where2getit.com/lite?action=getstategeojsonmap&appkey=EC84928C-9C0B-11E4-B999-D8F4B845EC6E').json()
state_names = {}
for f in response['states']['features']:
    state_names[f['properties']['name']] = f['properties']['regiondesc']
for s in response['labels']['features']:
    if s['properties']['num_stores'] > 0:
        state = s['properties']['name']
        print(f"Searching {state}...", end="")
        json_data = {
            'request': {
                'appkey': '1099682E-D719-11E6-A0C4-347BDEB8F1E5',
                'formdata': {
                    'geolocs': {
                        'geoloc': [
                            {
                                'addressline': state,
                                'state': state_names[state],
                            },
                        ],
                    },
                    'stateonly': 1,
                },
            },
        }
        response = requests.post('https://hosted.where2getit.com/culvers/rest/locatorsearch', json=json_data)
        count = 0
        for store in response.json()['response']['collection']:
            # I tried a bunch of other things and this is the only one that matched :facepalm:
            if "coming soon" in store['name'].lower():
                # print(f"{store['name']} not yet open")
                continue
            stores.append({
                'chain': "Culver's",
                'lat': float(store['latitude']),
                'long': float(store['longitude']),
                'address': store['address1'],
                'city': store['city'],
                'state': store['state'],
                'zip': store['postalcode'],
                'website': store['url'],
            })
            count += 1
        print(count)
        if count != s['properties']['num_stores']:
            print(f"Inequal for {state}: {count} != {s['properties']['num_stores']}")
print(f"{len(stores)} locations found")

# Kwik Trip
# Export to CSV from https://www.kwiktrip.com/Maps-Downloads/Store-List
print("Searching for Kwik Trip locations")
kwiktrip_count = 0
with open('stores.csv') as f:
    reader = csv.DictReader(f)
    for row in reader:
        kwiktrip_count += 1
        stores.append({
            'chain': "Kwik Trip",
            'lat': float(row['Latitude']),
            'long': float(row['Longitude']),
            'address': row['Address'].title(),
            'city': row['City'].title(),
            'state': row['State'],
            'zip': row['Zip'],
            'website': f"https://www.kwiktrip.com/locator/store?id={row['Store Number']}",
        })
print(f"{kwiktrip_count} locations found")

# Menards
print("Searching for Menards locations")
# Visit https://www.menards.com/store-details/locator.html; view source; find a
# value for `data-initial-stores`; copy its value into `data` here. Incapsula, the
# DDoS mitigation platform that Menards uses, makes this reeeeaaaally hard to do
# with `requests`. Those lines should look something like this:
#
# 125 | <meta
# 126 |     id="initialStores"
# 127 |     data-initial-stores="[{&quot;number&quot;:3132,..."
# 128 | >
# 129 | </head>
#
# (line 127 is the one you want here)
data = "[]"
menardses = json.loads(data.replace("&quot;", '"'))
for menards in menardses:
    stores.append({
        'chain': "Menards",
        'lat': float(menards['latitude']),
        'long': float(menards['longitude']),
        'address': menards['street'].title(),
        'city': menards['city'].title(),
        'state': menards['state'],
        'zip': menards['zip'],
        'website': f"https://www.menards.com/store-details/store.html?store={menards['number']}",
    })
print(f"{len(menardses)} locations found")

with open('stores.json', 'w') as f:
    f.write(json.dumps(stores))
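Since three different sources feed into stores.json, a quick sanity check on the output can catch a source format change before it skews the analysis. This is a hypothetical validator, not part of the original scripts; the field names match those written above, and the coordinate bounds are a rough contiguous-US box:

```python
# Hypothetical sanity check for records like those written to stores.json:
# every entry should carry the fields the analysis scripts rely on, with
# numeric coordinates in a plausible range for the contiguous US.
REQUIRED = {'chain', 'lat', 'long', 'address', 'city', 'state', 'zip', 'website'}

def validate(stores):
    problems = []
    for i, s in enumerate(stores):
        missing = REQUIRED - set(s)
        if missing:
            problems.append((i, f"missing {sorted(missing)}"))
        elif not (24 <= s['lat'] <= 50 and -125 <= s['long'] <= -66):
            problems.append((i, "coordinates outside the contiguous US"))
    return problems

# Demo on a made-up record (address and coordinates are illustrative only)
sample = [{'chain': "Culver's", 'lat': 44.89, 'long': -94.37,
           'address': '123 Example St', 'city': 'Hutchinson', 'state': 'MN',
           'zip': '55350', 'website': 'https://www.culvers.com/'}]
print(validate(sample))  # an empty list means no problems found
```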
1. Source: Eric says so. (The Menards API used to expose this information, but has since stopped.)
2. https://weblink.hutchinsonmn.gov/WebLink/DocView.aspx?id=16157&dbid=0&repo=Hutchinson&cr=1, pp. 36–38
3. https://mblsportal.sos.state.mn.us/Business/SearchDetails?filingGuid=3adf3dfd-6c16-e311-8e3a-001ec94ffe7f
4. The Culver’s here moved across the street a few years ago; it’s now on the north side of 212. However, the Culver’s restaurant, while updated to the correct address, still has the wrong coordinates: https://www.culvers.com/locator?key=220