Python and Beautiful Soup || Regex with variable before writing to file
I would love some assistance with an issue I'm currently having.
I'm working on a little Python scanner as a project.
The libraries I'm currently importing are:



requests
BeautifulSoup
re
tld


The exact issue is the 'scope' of the scanner.
I'd like to pass a URL to the code and have the scanner grab all the anchor tags from the page, but only the ones relevant to the base URL, ignoring out-of-scope links and subdomains.



Here is my current code. I'm by no means a programmer, so please excuse the sloppy, inefficient code.



import requests
from bs4 import BeautifulSoup
import re
from tld import get_tld, get_fld

# This grabs the URL
print("Please type in a URL:")
URL = input()

# This strips out everything, leaving only the first-level domain (future scope function)
def strip_domain(URL):
    global domain_name
    domain_name = get_fld(URL)
strip_domain(URL)


# This makes the request and cleans up the source code
def connection(URL):
    r = requests.get(URL)
    status = r.status_code
    sourcecode = r.text
    soup = BeautifulSoup(sourcecode, features="html.parser")
    cleanupcode = soup.prettify()

    # This strips the anchor tags and adds them to the links list
    links = []
    for link in soup.findAll('a', attrs={'href': re.compile("^http://")}):
        links.append(link.get('href'))

    # This writes our clean anchor tags to a file
    with open('source.txt', 'w') as f:
        for item in links:
            f.write("%s\n" % item)

connection(URL)


The exact code issue is in the "for link in soup.findAll" section.
I have been trying to filter the list for anchor tags that only contain the base domain, which is the global variable "domain_name", so that only the relevant links are written to the source.txt file.



google.com accepted
google.com/file accepted
maps.google.com not written
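One simple way to express that accept/reject rule (a standard-library sketch, not part of the scanner above; `in_scope` is a hypothetical helper name) is to compare each link's hostname against the base domain and keep only exact matches:

```python
from urllib.parse import urlparse

def in_scope(href, domain_name):
    # Hypothetical helper: keep a link only if its host is exactly the base domain.
    # Subdomains such as maps.google.com have a different hostname, so they fail.
    host = urlparse(href).hostname or ""
    return host == domain_name

print(in_scope("http://google.com", "google.com"))        # True
print(in_scope("http://google.com/file", "google.com"))   # True
print(in_scope("http://maps.google.com", "google.com"))   # False
```

This assumes the hrefs carry a scheme (the scanner already filters for `^http://`); note that a `www.` prefix would also be treated as a subdomain here.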


If someone could assist me or point me in the right direction, I'd appreciate it.
I was also thinking it would be possible to write every link to the source.txt file and then remove the 'out of scope' links afterwards, but I thought it more beneficial to do it without having to create additional code.



Additionally, I'm not the strongest with regex, but here is something that may help.
This is some regex to catch all variations of http, www, and https:



(^http://+|www.|https://)


To this I was going to append



.*{}'.format(domain_name)
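Combined and tightened a little (escaping the literal dots in the domain and anchoring the match), that idea might look like the following. This is a sketch of the approach, not a drop-in fix for the scanner:

```python
import re

domain_name = "google.com"  # would come from get_fld() in the scanner

# Escape the literal dots, allow http/https with an optional www. prefix,
# and require the domain to be followed by a slash or the end of the URL.
pattern = re.compile(r"^https?://(?:www\.)?{}(/|$)".format(re.escape(domain_name)))

print(bool(pattern.match("http://google.com/file")))   # True
print(bool(pattern.match("https://www.google.com")))   # True
print(bool(pattern.match("http://maps.google.com")))   # False
```

Without `re.escape`, the unescaped dots in `www.` and `google.com` would match any character, and without the `^` anchor the pattern could match mid-URL.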









Tags: python regex web beautifulsoup python-requests






      asked Nov 28 '18 at 17:02









Jonny Rice

          1 Answer














I cover two different situations, because I don't agree that the href value is always xxx.com. In practice you will get three, four, or more kinds of href values, such as /file, folder/file, etc. So you have to transform relative paths to absolute paths; otherwise you cannot gather all of the URLs.
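That relative-to-absolute transformation can be sketched with the standard library's `urljoin` (a standard-library helper, not part of the answer's code below):

```python
from urllib.parse import urljoin

base = "http://www.google.com/blabla"

# Relative hrefs are resolved against the page the link was found on.
print(urljoin(base, "/file"))        # http://www.google.com/file
print(urljoin(base, "folder/file"))  # http://www.google.com/folder/file
```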



Regex: (\/{2}([\w]+\.)?)([a-z\.]+)(?=\/?)





1. (\/{2}([\w]+\.)?) matches the non-main part, starting from //


2. ([a-z\.]+)(?=\/?) matches the specified characters until we reach /; we ought not to use .* (it over-matches)


          My Code



import re

_input = "http://www.google.com/blabla"

all_part = re.findall(r"(\/{2}([\w]+\.)?)([a-z\.]+)(?=\/?)", _input)[0]
_partA = all_part[2]            # google.com
_partB = "".join(all_part[1:])  # www.google.com
print(_partA, _partB)

site = [
    "google.com",
    "google.com/file",
    "maps.google.com"
]
href = [
    "https://www.google.com",
    "https://www.google.com/file",
    "http://maps.google.com"
]
for ele in site:
    if re.findall("^{}/?".format(_partA), ele):
        print(ele)

for ele in href:
    if re.findall("{}/?".format(_partB), ele):
        print(ele)





                edited Nov 29 '18 at 3:54
                answered Nov 29 '18 at 3:32









kcorlidy
