Getting bytes of a webpage from selenium


New member
I am trying to scrape a webpage that contains a PDF.

With requests, I used the following code to get the bytes and save them with open():
  pdf_response = requests.get(pdf_url)
  with open("sample.pdf", 'wb') as f:
      f.write(pdf_response.content)

And it works just fine.

However, on the webpage below I am using selenium, and I could not get the bytes from the response object to use in the code above:

# This does not return a bytes object the way requests does
from selenium import webdriver

driver = webdriver.Chrome()

content = driver.page_source.encode('utf-8').strip()

Link to the PDF (it has a captcha that I solve with 2captcha)

Current response that I receive:
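One common workaround (a sketch, not from the thread): let selenium load the page and get past the captcha, then copy the driver's cookies into a requests.Session and download the PDF bytes with requests. Here `driver` and `pdf_url` are assumed to already exist in your script:

```python
import requests

def session_from_selenium_cookies(cookies):
    """Build a requests.Session carrying the cookies from driver.get_cookies()."""
    s = requests.Session()
    for c in cookies:
        # selenium returns each cookie as a dict with 'name', 'value', etc.
        s.cookies.set(c['name'], c['value'],
                      domain=c.get('domain'), path=c.get('path', '/'))
    return s

# usage (assumes a selenium `driver` already on the page, captcha solved):
# s = session_from_selenium_cookies(driver.get_cookies())
# with open('sample.pdf', 'wb') as f:
#     f.write(s.get(pdf_url).content)
```

This works when the server only needs the session cookies to serve the file; if it also checks headers, copy those onto the session too.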


New member
I can get the PDF using only requests.

The only problem: I use Pillow to assemble an image of the full code and display it, and I have to recognize this code manually. But if you have a method to recognize it automatically, then it is not a problem.
import requests
import lxml.html
from PIL import Image
import io

headers = {
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:80.0) Gecko/20100101 Firefox/80.0',

# --- create Session ---

s = requests.Session()
s.headers.update(headers)  # send the browser-like headers on every request

# --- load main page ---

url = ''  # JSON

r = s.get(url)

# --- get images ---

soup = lxml.html.fromstring(r.text)

image_urls = soup.xpath('//img/@src')

# --- generate one image ---

full_image ='RGB', (40*5, 50))

for i, url in enumerate(image_urls):
    r = s.get('' + url)
    image =
    full_image.paste(image, (40*i, 0))

# --- ask for code ---

code = input('code> ')

#print('code:', code)

# --- get PDF ---

r ='', data={'code': code})

if r.headers['Content-Type'] != 'application/pdf':
    print('It is not a PDF file')
else:
    with open('output.pdf', 'wb') as fh:
        print('size:', fh.write(r.content))