Webscraping on BeautifulSoup and Git Bash and transferring to a CSV











I have been web scraping a website that contains a table, and I would ideally like to scrape that table into an Excel sheet while keeping its table structure. I will include what I have so far. I have tried both Scrapy and BeautifulSoup and have run into problems with both. Help would be great!



import requests
import csv
from bs4 import BeautifulSoup

url = 'https://pcpartpicker.com/products/video-card/'
r = requests.get(url)
html = r.text

soup = BeautifulSoup(html, 'lxml')

name = soup.find('tbody', {"id":"category_content"})

print(name.text)









python python-2.7 beautifulsoup scrapy

edited Nov 22 at 22:06 by darthbith
asked Nov 22 at 16:56 by Ailis Curran
1 Answer
Learn to use Selenium, or Scrapy with Splash. My recommendation for small tasks is Selenium; you can learn the basics in a day.



from selenium import webdriver
from bs4 import BeautifulSoup as bs
import time

options = webdriver.ChromeOptions()
# install Chrome if you don't have it, download chromedriver and point this path at it
driver = webdriver.Chrome(executable_path="D:/Python/chromedriver", options=options)
driver.get("https://pcpartpicker.com/products/video-card/")
time.sleep(2)  # give the JavaScript-rendered table time to load

soup = bs(driver.page_source, 'lxml')
name = soup.find('tbody', {"id": "category_content"})
for row in name.find_all('tr'):  # iterate the table rows, not raw child nodes
    link = row.find('a')
    if link:
        print(link.text)
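Since the question is ultimately about transferring the table to a CSV (which opens directly in Excel), here is a minimal follow-up sketch using Python's csv module, assuming Python 3 and reusing the driver created above; the output filename video_cards.csv is just an illustrative choice, not something from the original post.

import csv
from bs4 import BeautifulSoup as bs

# assumes `driver` from the snippet above has already loaded the page
soup = bs(driver.page_source, 'lxml')
table = soup.find('tbody', {"id": "category_content"})

# hypothetical output filename; newline='' avoids blank rows on Windows (Python 3)
with open('video_cards.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    for row in table.find_all('tr'):
        # one CSV row per table row, one column per table cell
        cells = [td.get_text(strip=True) for td in row.find_all('td')]
        if cells:
            writer.writerow(cells)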





edited Nov 22 at 21:06
answered Nov 22 at 20:59 by Deskom88