I have a script that takes the final redirection URL and saves it into a CSV file.
The script writes the codes in one column, for example in A1, then A3, then A5.
How can I make it write the codes across one row instead, for example A1 B1 C1 D1?
Please see <a href="http://i.imgur.com/Gl5jdrf.jpg" rel="nofollow">this image</a>: the red highlight is what I want; the blue is my current result, which I don't want (the list sits in one column going down A1, A3, A5, and there is a blank row between every cell!).
This is my final script:

import urllib2
import csv

url = 'http://www.test.com'
u = urllib2.urlopen(url)
localfile = open('C:\\test\\file.csv', 'a')
writer = csv.writer(localfile)
writer.writerow([u.geturl()])
localfile.close()

Answer1:
Why not just create the CSV yourself if it will only have one row?

import urllib2

url = 'http://www.google.com'
u = urllib2.urlopen(url)
localFile = open('C:\\file.csv', 'ab')
localFile.write(u.geturl() + ",")
localFile.close()

Answer2:
writer.writerow() writes a list out as one row, so every call produces a new row; that is why your results end up in a single column. If you want the values in one row, build a list first and put all the data for that row into it, such as l = [111, 222, 333, 444], then call writer.writerow(l) just once. That gives you what you want.
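The difference can be seen with an in-memory buffer (a minimal sketch using Python 3's io.StringIO; in Python 2 the same idea applies with a file opened in 'wb'):

```python
import csv
import io

# one writerow() call per value: each value lands on its own row (a column)
col_buf = io.StringIO()
col_writer = csv.writer(col_buf)
for code in ['A1', 'B1', 'C1']:
    col_writer.writerow([code])

# one writerow() call with the whole list: all values land on a single row
row_buf = io.StringIO()
csv.writer(row_buf).writerow(['A1', 'B1', 'C1'])

print(col_buf.getvalue())  # three separate lines
print(row_buf.getvalue())  # one comma-separated line
```

The loop produces one cell per row; the single call with the full list produces the one-row layout the question asks for.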
<strong>edit:</strong> <br /> If the script runs like a daemon, staying alive and waiting for input:

import urllib2
import csv

L = []
urls = ['http://www.google.com', 'http://www.facebook.com', 'http://www.twitter.com']
for url in urls:
    u = urllib2.urlopen(url)
    L.append(u.geturl())
# 'wb' avoids the blank line after every row that Windows adds in text mode
localfile = open('C:\\test\\file.csv', 'wb')
writer = csv.writer(localfile)
writer.writerow(L)
localfile.close()
If the script works as a callback and receives only one url each time, I'm sorry to say I don't see any API in the csv module for modifying an existing file in place.
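One workaround for the callback case (my own sketch, not a csv-module feature, with a hypothetical file path): read the existing row back, append the new value, and rewrite the whole file each time.

```python
import csv
import os
import tempfile

def append_to_row(path, value):
    # read the single existing row, if the file is already there
    row = []
    if os.path.exists(path):
        with open(path) as f:  # (on Python 3 / Windows you would pass newline='')
            rows = list(csv.reader(f))
        if rows:
            row = rows[0]
    # rewrite the whole file with the new value appended to that row
    row.append(value)
    with open(path, 'w') as f:
        csv.writer(f).writerow(row)

# hypothetical demo path; each call adds one more cell to the same row
path = os.path.join(tempfile.gettempdir(), 'urls_demo.csv')
if os.path.exists(path):
    os.remove(path)
append_to_row(path, 'http://www.google.com')
append_to_row(path, 'http://www.twitter.com')
```

Rewriting the file on every call is wasteful for long rows, but for a handful of urls it keeps the one-row layout without any special csv API.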
And as for me, I don't think you need a CSV file in this case. One row in a CSV usually represents a whole record, not a plain list. If you want to import the file easily, you can just use a normal text file, with one url per line or the urls separated by spaces. The next time you need it, you can use str methods such as split to turn it back into a list quickly:
>>> 'http://www.google.com\nhttp://www.facebook.com\nhttp://www.twitter.com'.split('\n')
['http://www.google.com', 'http://www.facebook.com', 'http://www.twitter.com']
>>>
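The plain-text round trip as a whole looks like this (a minimal sketch; the file path is hypothetical):

```python
import os
import tempfile

urls = ['http://www.google.com', 'http://www.facebook.com', 'http://www.twitter.com']

# write one url per line to a plain text file
path = os.path.join(tempfile.gettempdir(), 'urls_demo.txt')
with open(path, 'w') as f:
    f.write('\n'.join(urls))

# read it back and split on newlines to recover the list
with open(path) as f:
    restored = f.read().split('\n')

print(restored == urls)  # True
```

No csv machinery is needed: join to write, split to read, and you have your list again.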