Tag Archives: pdf

View PDFs with Google

Google Docs just keeps getting better and better: it has long supported OpenDocument, the international standard for office document exchange. Now it supports PDF as well:

Upload PDF Files to Google Docs
Update: In less than a day, the feature has been added, and you can now upload PDF files, share them, and view them online. The PDF viewer is not very advanced, but you can use it to search inside a PDF file, select a block of text (Ctrl+C to copy it), and jump to a particular page.

PDF is a great technology, but Adobe Acrobat (the program typically used to view PDFs) is awful. One of the things I miss most about Linux is Konqueror, the non-Acrobat PDF viewer. Now I have a way on Windows to easily view PDFs without Acrobat: Google!

HOWTO: Batch Download a Book in PDF Pages from NetLibrary

NetLibrary is an online book service that universities and other institutions pay to supply their users with virtual copies of books. These books are available online and can be searched, downloaded, and saved. The catch is that NetLibrary’s interface limits you to viewing one page at a time (in the horribly slow Acrobat Reader). Given how unresponsive Acrobat makes many computers, this can turn printing out a long book into an hours-long job.

Therefore, I took the time to figure out how to batch download a book from NetLibrary, saving myself valuable time.

My solution uses a combination of Firefox and Perl, but other solutions are of course available.

After I loaded up the first true page of the book in the NetLibrary interface, I gave the frame with the PDF its own window, then used Firefox’s Tools | Page Info | Media properties dialog box to determine the URL of the embedded PDF file. It turns out it’s a call to a program named nlReader.dll, which takes a book identification number and a page filename as arguments:

http://0-www.netlibrary.com.library.unl.edu/nlreader/nlReader.dll?BookID=BOOKIDGOESHERE&FileName=FILENAMEGOESHERE

Obviously, the library.unl.edu part requires my university proxy. For normal pages, the filename was in the format Page_1.pdf, Page_2.pdf, etc. So I wrote a Perl script to create hyperlinks to pages 1 to 499, saved the output to HTML, used the DownloadThemAll! Firefox extension to get them, and…
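The link-generating step is simple enough to sketch. The original script was Perl; here is a Python equivalent, where the BASE_URL and BOOK_ID values are placeholders you would substitute with what Page Info revealed for your own book:

```python
# Sketch of the link-generating step (the original used Perl).
# BASE_URL and BOOK_ID below are hypothetical placeholders.
BASE_URL = "http://0-www.netlibrary.com.library.unl.edu/nlreader/nlReader.dll"
BOOK_ID = "12345"  # book identification number from the Page Info URL

def page_links(last_page):
    """Return an HTML page linking to Page_1.pdf through Page_<last_page>.pdf."""
    links = [
        f'<a href="{BASE_URL}?BookID={BOOK_ID}&FileName=Page_{n}.pdf">Page {n}</a>'
        for n in range(1, last_page + 1)
    ]
    return "<html><body>\n" + "<br>\n".join(links) + "\n</body></html>"

# Save the result, open it in Firefox, and point DownloadThemAll! at it.
with open("pages.html", "w") as f:
    f.write(page_links(499))
```

From there, DownloadThemAll! can fetch every linked PDF in one batch.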

Then Acrobat crashed trying to print out those hundreds of PDFs. Boo! Fortunately, Perl came to my rescue… I used ppm to install the module PDF::Reuse, then wrote a script to append all those PDFs into one. The final product is about 500 pages and 70 megs, but quite easy to store, print out, etc.

Thanks, NetLibrary!