All Comments
TopTalkedBooks posted at August 20, 2017

You can create a script that scrapes the content of that link. The problem is that you will have to maintain that script every time the website gets updated.

As the form doesn't have a CAPTCHA or any mechanism to prevent automated queries, you can set up something simple.

You can make the POST request using cURL:

//set POST variables
$url = '';
$fields = array(
    'mrn' => "406691",
);

//url-ify the data for the POST
$fields_string = '';
foreach ($fields as $key => $value) { $fields_string .= $key . '=' . urlencode($value) . '&'; }
$fields_string = rtrim($fields_string, '&');

//open connection
$ch = curl_init();

//set the url, number of POST vars, POST data
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_POST, count($fields));
curl_setopt($ch, CURLOPT_POSTFIELDS, $fields_string);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); //return the response instead of printing it

//execute post
$result = curl_exec($ch);

//close connection
curl_close($ch);
Take a look at the following links:

TopTalkedBooks posted at August 20, 2017

There is a book on this topic, "Webbots, Spiders, and Screen Scrapers: A Guide to Developing Internet Agents with PHP/CURL"; see a review here.

PHP-Architect covered it in a well-written article by Matthew Turland in the December 2007 issue.

TopTalkedBooks posted at August 20, 2017

Not a tutorial, but I can recommend the book Webbots, Spiders, and Screen Scrapers.
