Scrapy insert into database
I have a working script using Scrapy that inserts scraped items into a database via a pipeline class. However, this seems to slow down the scrape considerably. I'm using the process_item method to insert each scraped item into the database as it is scraped. Would it be faster to output the scraped items to a CSV file and then use a stored procedure to insert the data into the database?
def process_item(self, item, spider):
    if 'address_line_1' in item:
        # Insert only if no row already exists for this date/address/suburb/state/postcode.
        sql = """INSERT INTO dbo.PropertyListings (date, url, ad_type, address_line_1, suburb, state, postcode)
                 SELECT ?, ?, ?, ?, ?, ?, ?
                 WHERE NOT EXISTS
                 (   SELECT 1
                     FROM dbo.PropertyListings
                     WHERE date = ?
                       AND address_line_1 = ?
                       AND suburb = ?
                       AND state = ?
                       AND postcode = ?
                 )
              """
        self.crsr.execute(sql, item['date'], item['url'], item['ad_type'], item['address_line_1'],
                          item['suburb'], item['state'], item['postcode'],
                          item['date'], item['address_line_1'], item['suburb'], item['state'],
                          item['postcode'])
        self.conn.commit()
    else:
        # No address available: deduplicate on date and url instead.
        sql = """INSERT INTO dbo.PropertyListings (date, url, ad_type, address_line_1, suburb, state, postcode)
                 SELECT ?, ?, ?, ?, ?, ?, ?
                 WHERE NOT EXISTS
                 (   SELECT 1
                     FROM dbo.PropertyListings
                     WHERE date = ?
                       AND url = ?
                 )
              """
        self.crsr.execute(sql, item['date'], item['url'], item['ad_type'], '', item['suburb'],
                          item['state'], item['postcode'], item['date'], item['url'])
        self.conn.commit()
    return item
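For reference, the CSV route floated above could look roughly like this: let Scrapy's built-in feed exporter write the file, then load it in a single statement. This is only a sketch; the DSN, file path, and CSV layout are assumptions, BULK INSERT needs the file to be readable from the SQL Server machine, and the NOT EXISTS deduplication would then have to move into a stored procedure or a staging table.

# settings.py -- let Scrapy's feed exporter write the CSV
# (Scrapy 1.x setting names; newer versions use the FEEDS dict instead)
FEED_URI = 'listings.csv'
FEED_FORMAT = 'csv'

# load_listings.py -- one-off loader, run after the crawl finishes
import pyodbc

conn = pyodbc.connect('DSN=mydb')  # hypothetical ODBC DSN
crsr = conn.cursor()
crsr.execute(r"""
    BULK INSERT dbo.PropertyListings
    FROM 'C:\scrapes\listings.csv'   -- path as seen by the SQL Server machine
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2)
""")
conn.commit()

One caveat: plain BULK INSERT does not cope well with quoted fields before SQL Server 2017's FORMAT = 'CSV' option, so the exported data must not contain embedded commas.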
python scrapy database-performance
asked Nov 27 '18 at 3:22 by user3381431 · edited Nov 27 '18 at 3:38 by aydow
1 Answer
It looks like you're doing one insert (and one commit) per scraped item. This is indeed very slow! You should consider bulk-inserting after you've collected all of your data, or at least inserting in chunks.
Use something like this:

def scrape_me_good():
    data = []
    for something in something_else():
        # Process each raw record, then buffer it
        data.append(process_a_something(something))
    # One bulk insert at the end instead of one per row
    bulk_insert(data)
Instead of this:

def scrape_bad():
    for something in something_else():
        # One database round trip per record
        single_insert(process_a_something(something))
See this answer for quite a good breakdown of insert performance in SQL Server.
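Applied to the pipeline in the question, a minimal batched sketch might look like the following. The connection string, batch size, and simplified INSERT are assumptions; the per-row NOT EXISTS check is dropped here, so deduplication would need to move to a unique index (e.g. with IGNORE_DUP_KEY) or a staging-table merge.

import pyodbc

class BatchedListingsPipeline:
    """Buffers scraped items and flushes them in chunks
    instead of one INSERT and commit per item."""

    BATCH_SIZE = 500  # assumption: tune to memory and crawl rate

    def open_spider(self, spider):
        self.conn = pyodbc.connect('DSN=mydb')  # hypothetical connection string
        self.crsr = self.conn.cursor()
        self.crsr.fast_executemany = True  # pyodbc >= 4.0.19: send all rows in one round trip
        self.buffer = []

    def process_item(self, item, spider):
        self.buffer.append((
            item['date'], item['url'], item['ad_type'],
            item.get('address_line_1', ''), item['suburb'],
            item['state'], item['postcode'],
        ))
        if len(self.buffer) >= self.BATCH_SIZE:
            self._flush()
        return item

    def close_spider(self, spider):
        if self.buffer:
            self._flush()  # write whatever is left at the end of the crawl
        self.conn.close()

    def _flush(self):
        sql = """INSERT INTO dbo.PropertyListings
                 (date, url, ad_type, address_line_1, suburb, state, postcode)
                 VALUES (?, ?, ?, ?, ?, ?, ?)"""
        self.crsr.executemany(sql, self.buffer)
        self.conn.commit()
        self.buffer = []

With fast_executemany enabled, executemany packs all parameter sets into a single round trip, which is usually where most of the per-item overhead goes.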
answered Nov 27 '18 at 3:32 by aydow · edited Nov 27 '18 at 3:37