spayee/graphy course
The webpage has a sidebar with categories and sub-categories, and each entry just opens a PDF.
PDF files are stored here - https://randomlettersandnumbers.cloudfront.net/w/o/randomLettersAndNumbers/v/randomLettersAndNumbers/u/randomLettersAndNumbers/p/assets/pdfs/2021/01/13/randomLettersAndNumbers/file.pdf
can you share the url? in the worst case you’ll need to write a custom crawler that works by automating a web browser.
www . acadboost . com/courses/11th-JEE-MainAdvanced-Notes
ok, i figured out how to download :) i used firefox devtool, might be slightly different on chromium.
after going to a pdf page (like this), open devtools on the network tab, select XHR, filter with the string /preview/url and refresh. you’ll get one item that contains ‘url’ and ‘p’. as you also experienced, the pdf is password protected. now, they have a JS function defined named parseJData; you can use it like parseJData(p, !0) where p is the p value from the xhr response, e.g. parseJData("9dd1bbb2b96776b603b2666fb3173133x8Y+a7Fx0tdy2ntJSUCmLFQQW+BMJFz+UGUrdSyaNz2FpFx2fSJvzEJ8JdWXGbeH16ac82d92bc66da09f044fe9faebaaa9", !0). that’s your pdf password. you totally can automate this, but there don’t seem to be that many PDFs (if you’re only going for that one lecture). I’d just keep the devtools open, check “persist logs” (click the options button to find it), browse through all the PDF pages, save the log as a HAR file, and write a one-off script to extract all the url and p values.
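the one-off HAR extraction script could look something like this — a minimal sketch in python, assuming the HAR was saved from the network tab and the preview responses are plain JSON with the ‘url’ and ‘p’ fields seen above:

```python
import json

def extract_url_p(har_path):
    """Pull (url, p) pairs out of a saved HAR file.

    Looks for entries whose request URL contains '/preview/url' and
    parses the JSON body of each matching response.
    """
    with open(har_path, encoding="utf-8") as f:
        har = json.load(f)

    pairs = []
    for entry in har["log"]["entries"]:
        if "/preview/url" not in entry["request"]["url"]:
            continue
        body = entry["response"]["content"].get("text")
        if not body:
            continue
        data = json.loads(body)
        if "url" in data and "p" in data:
            pairs.append((data["url"], data["p"]))
    return pairs
```

point it at the saved .har file and print one url/p pair per line; the url goes to your downloader and p goes through parseJData for the password.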
Which part is the password?
{"response":true,"url":"https://d2a5xnk4s7n8a6.cloudfront.net/w/o/5de58cc5e4b06eaef9799a5e/v/5eadee8b0cf250d48d95a674/u/69a29c32cf33e87f41f96eb5/p/assets/pdfs/2020/05/02/5eadee8b0cf250d48d95a674/file.pdf","p":"085b4cff79ec580e9687ecf41d77672feDjUk8jWDf2mBGaRtmLWv/bkykxiE4t16pD/ZQJvvuLn1AFNM35N67fA61ORomhx99cbac6aada470ee7d48d35ebc98d09d","allowDownload":false,"allowWatermark":false}
Also there is an allowDownload field. How do I make it true? And why are there 2, sometimes 3, .pdf requests in the network tab when there is only 1 pdf on the page?
sent you a pm, hope it helps
try something along the lines of
wget -r -np -k -p "website to archive recursive download"
that may work, but if it does not, i would download the page html, filter out all the pdf links (some regex or grep magic), and then just give that list to wget or some other file downloader.
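the regex/grep step could be a few lines of python — a sketch, assuming the links appear as absolute .pdf URLs in the saved HTML (relative links would need to be joined against the page URL first):

```python
import re

# crude but serviceable: match absolute http(s) URLs ending in .pdf
PDF_LINK = re.compile(r'https?://[^\s"\'<>]+\.pdf')

def extract_pdf_links(html: str) -> list[str]:
    """Return de-duplicated .pdf URLs found in an HTML string, in order."""
    seen = []
    for url in PDF_LINK.findall(html):
        if url not in seen:
            seen.append(url)
    return seen
```

dump the result one URL per line into a file and feed it to wget -i links.txt (or curl in a loop).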
if you can give the url, we can get a bit more specific.
Website needs login.
I downloaded some PDFs manually via F12 and they are password protected. How do I unlock them or get the password?
in this case, build a list of urls, grab your cookies from the browser, and use curl plus some scripting to fetch everything.
for cookies, open devtools, go to the network tab, find the pdf file there, right click it, and you will find an option along the lines of ‘copy as cURL’. copy that and paste it somewhere, then repeat the exercise for some other file. this should show you the pattern for how to make the request — it most likely just needs a bearer auth token or cookie in the headers, or something like that.

