Python and Beautiful Soup || Regex with variable before writing to file
I would love some assistance with an issue I'm currently having.
I'm working on a little Python scanner as a project.
The libraries I'm currently importing are:
requests
BeautifulSoup
re
tld
The exact issue is regarding the 'scope' of the scanner.
I'd like to pass a URL to the code and have the scanner grab all the anchor tags from the page, but only the ones relevant to the base URL, ignoring out-of-scope links and also subdomains.
Here is my current code. I'm by no means a programmer, so please excuse the sloppy, inefficient code.
import requests
from bs4 import BeautifulSoup
import re
from tld import get_tld, get_fld

# This grabs the URL
print("Please type in a URL:")
URL = input()

# This strips out everything, leaving only the first-level domain (future scope function)
def strip_domain(URL):
    global domain_name
    domain_name = get_fld(URL)
strip_domain(URL)

# This makes the request and cleans up the source code
def connection(URL):
    r = requests.get(URL)
    status = r.status_code
    sourcecode = r.text
    soup = BeautifulSoup(sourcecode, features="html.parser")
    cleanupcode = soup.prettify()

    # This strips the anchor tags and adds them to the links list
    links = []
    for link in soup.findAll('a', attrs={'href': re.compile("^http://")}):
        links.append(link.get('href'))

    # This writes our clean anchor tags to a file
    with open('source.txt', 'w') as f:
        for item in links:
            f.write("%s\n" % item)
connection(URL)
The exact code issue is around the "for link in soup.findAll" section.
I have been trying to filter the list for anchor tags that only contain the base domain, which is the global variable domain_name, so that only the relevant links are written to the source.txt file, for example:
google.com accepted
google.com/file accepted
maps.google.com not written
If someone could assist me or point me in the right direction, I'd appreciate it.
I was also thinking it would be possible to write every link to the source.txt file and then filter it afterwards, removing the 'out of scope' links, but I thought it would be better to do it without having to create additional code.
Additionally, I'm not the strongest with regex, but here is something that may help.
This is some regex to catch all variations of http, www, and https:
(^http://+|www.|https://)
To this I was going to append:
.*{}'.format(domain_name)
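For illustration, here is a rough, untested sketch of how that idea could be wired up, using re.escape so the dots in domain_name are treated literally; the domain value and sample hrefs below are assumed examples, not output from the scanner above:

import re

# Hypothetical sketch of the scope filter described above: compile one pattern
# from domain_name, escaping its dots so "google.com" cannot accidentally match
# "googleXcom", and allow an optional leading "www.".
domain_name = "google.com"   # assumed example value produced by get_fld()
scope_pattern = re.compile(r"^https?://(?:www\.)?{}(?:/|$)".format(re.escape(domain_name)))

for href in ["http://google.com", "http://google.com/file", "http://maps.google.com"]:
    if scope_pattern.match(href):
        print(href, "accepted")
    else:
        print(href, "not written")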
python regex web beautifulsoup python-requests
asked Nov 28 '18 at 17:02 by Jonny Rice
1 Answer
I provide two different situations, because I don't agree that the href value will simply be xxx.com. In practice you will get three, four, or more kinds of href values, such as /file, folder/file, etc. So you have to transform relative paths into absolute paths, otherwise you cannot gather all of the URLs.
Regex: (/{2}([\w]+\.)?)([a-z.]+)(?=/?)
(/{2}([\w]+\.)?) matches the non-main part, starting from //.
([a-z.]+)(?=/?) matches the allowed characters up to the next /; we ought not to use .* (it over-matches).
My Code

import re

_input = "http://www.google.com/blabla"
all_part = re.findall(r"(/{2}([\w]+\.)?)([a-z.]+)(?=/?)", _input)[0]
_partA = all_part[2]              # google.com
_partB = "".join(all_part[1:])    # www.google.com
print(_partA, _partB)

site = [
    "google.com",
    "google.com/file",
    "maps.google.com"
]
href = [
    "https://www.google.com",
    "https://www.google.com/file",
    "http://maps.google.com"
]

# bare domains: keep entries that start with the base domain
for ele in site:
    if re.findall("^{}/?".format(_partA), ele):
        print(ele)

# full URLs: keep hrefs that contain the www host
for ele in href:
    if re.findall("{}/?".format(_partB), ele):
        print(ele)
answered Nov 29 '18 at 3:32 by kcorlidy (edited Nov 29 '18 at 3:54)
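As a complementary sketch (not part of the answer above), the relative-to-absolute conversion the answer mentions can also be done with urllib.parse.urljoin, and the scope check with tld.get_fld, which the question already imports; the page URL and href list below are assumed example values:

from urllib.parse import urljoin, urlparse
from tld import get_fld

page_url = "http://www.google.com/blabla"        # assumed example input
hrefs = ["/file", "folder/file",
         "https://www.google.com/file", "http://maps.google.com"]

base_fld = get_fld(page_url)                     # "google.com"

for href in hrefs:
    absolute = urljoin(page_url, href)           # resolve relative paths against the page URL
    host = urlparse(absolute).hostname
    # keep links on the base domain (optionally prefixed with www.), drop other subdomains
    if get_fld(absolute, fail_silently=True) == base_fld and host in (base_fld, "www." + base_fld):
        print(absolute, "in scope")
    else:
        print(absolute, "out of scope")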