I'm using the following code to split a CSV file into multiple chunks (sourced from here):
import csv
import itertools
import multiprocessing as mp
import time

def worker(chunk):
    # placeholder work: just report the size of each chunk
    print len(chunk)

def keyfunc(row):
    # rows are grouped by the value in their first column
    return row[0]

def main():
    pool = mp.Pool()
    largefile = 'Counseling.csv'
    num_chunks = 10
    start_time = time.time()
    results = []
    with open(largefile) as f:
        reader = csv.reader(f)
        reader.next()  # skip the header row
        chunks = itertools.groupby(reader, keyfunc)
        while True:
            # make a list of num_chunks chunks
            groups = [list(chunk) for key, chunk in
                      itertools.islice(chunks, num_chunks)]
            if groups:
                result = pool.map(worker, groups)
                results.extend(result)
            else:
                break
    pool.close()
    pool.join()
    print "--- %s seconds ---" % (time.time() - start_time)

if __name__ == '__main__':
    main()
However, the chunks the code produces always come out the same regardless of the num_chunks I choose. For example, whether I set num_chunks to 1 or 10, I always get the output below when processing a sample file. Ideally, I'd like to chunk the file so that the rows are equitably distributed across chunks (see the sketch after the output for the kind of splitting I mean).
Note, the real file I am chunking is over 13 million rows long, which is why I am processing it piece by piece. That is a must!
6
7
1
...
1
1
94
--- 0.101687192917 seconds ---
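For clarity, here is a rough, untested sketch of the kind of even splitting I am after (iter_batches and batch_size are just names I made up for this sketch, and I've kept the same trivial worker for illustration):

import csv
import itertools
import multiprocessing as mp

def worker(chunk):
    print len(chunk)

def iter_batches(reader, batch_size):
    # yield successive lists of at most batch_size rows, reading
    # the file lazily instead of grouping on a key column
    while True:
        batch = list(itertools.islice(reader, batch_size))
        if not batch:
            return
        yield batch

def main():
    pool = mp.Pool()
    batch_size = 10000  # rows per chunk; tune for memory vs. overhead
    with open('Counseling.csv') as f:
        reader = csv.reader(f)
        reader.next()  # skip the header row
        batches = iter_batches(reader, batch_size)
        while True:
            # hand the pool a bounded number of batches at a time so
            # the whole 13M-row file is never held in memory at once
            group = list(itertools.islice(batches, mp.cpu_count()))
            if not group:
                break
            pool.map(worker, group)
    pool.close()
    pool.join()

if __name__ == '__main__':
    main()

With something like this, every chunk except possibly the last would have exactly batch_size rows, independent of the values in the first column.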