How can I get html content written by JavaScript with Selenium/Python [duplicate]

I’m doing web crawling with Selenium and I want to get an element (such as a link) that is written by JavaScript after Selenium simulates a click on a fake link.

I tried get_html_source(), but it doesn’t include the content written by JavaScript.

Code I’ve written:

    def test_comment_url_fetch(self):
        sel = self.selenium
        sel.open("/rmrb")
        url = sel.get_location()
        #print url
        if url.startswith('http://login'):
            pass  # redirected to the login page; handling was cut off in the post
        i = 1
        while True:
            try:
                if i == 1:
                    xpath = "//div[@class='WB_feed_type SW_fun S_line2']/div/div/div[3]/div/a[4]"
                else:
                    # XPath positional predicates are 1-based
                    xpath = "//div[@class='WB_feed_type SW_fun S_line2'][%d]/div/div/div[3]/div/a[4]" % i
                sel.click(xpath)
                print "click"
            except Exception, e:
                print e
                break  # no more fake links to click
            i += 1
        html = sel.get_html_source()
        html_file = open(r"tmp\foo.html", 'w')  # raw string: otherwise "\f" is a form feed
        html_file.write(html.encode('utf-8'))
        html_file.close()

I use a while loop to click a series of fake links that trigger JS actions to reveal extra content, and that content is what I want. But sel.get_html_source() doesn’t return it.
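As a side note on the loop: XPath positional predicates are 1-based, so the index in the format string starts at 1, not 0. A tiny sketch of the XPath construction used above (the helper name is made up; the div class comes from my code):

```python
# Hypothetical helper illustrating the indexed XPath built in the loop above.
def feed_link_xpath(i):
    # XPath positional predicates are 1-based: [1] selects the first feed block
    return ("//div[@class='WB_feed_type SW_fun S_line2'][%d]"
            "/div/div/div[3]/div/a[4]" % i)

print(feed_link_xpath(1))
```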

Can anybody help? Thanks a lot.

Since I usually do post-processing on the fetched nodes, I run JavaScript directly in the browser with execute_script. For example, to get all a tags:

    js_code = "return document.getElementsByTagName('a')"
    your_elements = sel.execute_script(js_code)

Edit: execute_script and get_eval are equivalent, except that get_eval performs an implicit return; in execute_script it has to be stated explicitly.
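To make that concrete: get_eval (Selenium RC) takes a bare JavaScript expression, while execute_script (WebDriver) runs a script body, so you must return the value yourself. A small sketch, assuming sel/driver already exist; the to_webdriver_script helper is hypothetical, just to show the difference:

```python
def to_webdriver_script(expression):
    # get_eval evaluates a bare JS expression and hands the result back
    # implicitly; execute_script runs a script body, so the value must be
    # handed back with an explicit `return`.
    return "return " + expression

expr = "document.getElementsByTagName('a').length"
# link_count = sel.get_eval(expr)                                # implicit return
# link_count = driver.execute_script(to_webdriver_script(expr))  # explicit return
```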