Besides scraping webpages, you can write code to do monotonous tasks periodically. For example, with a few lines of code you can easily clear your /tmp directory, check for new mail/news/cricket scores every 15 minutes, or maybe upvote, like, or comment on every post of your crush/gf/bf on Facebook without actually surfing Facebook.
It all depends on what your requirements are. I'll tell you how Python helped me. Like you, I am also a proud Linux user, but sometimes issues pop up. In my case I wasn't able to get the battery status info from the status bar no matter what. I searched on Stack Overflow but couldn't find anything satisfactory. Then I made Balert (Battery alert :P). It's a console application which reads the battery charge status from /sys/class/power_supply every 5 minutes and gives a voice notification (something like "Hey dude, I need to be charged :P") as a warning if your charge level goes below 15%. No jokes, you can actually set a custom notification message. You can also decide when you want to be warned (the % charge level).
So this is what Balert does:
- Creates a cron job if one doesn't already exist.
- Reads the battery info at a regular interval. This is where cron comes into the picture.
- Warns the user if the battery level is low. Simple stuff, isn't it?
For the text-to-speech part I've used the pyttsx package. You can see the implementation in detail on GitHub.
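The core idea fits in a few lines. Here is a minimal sketch (not Balert's actual code; the threshold and message are just examples): read the charge level from /sys/class/power_supply and speak a warning with pyttsx when it drops too low.

import glob
import pyttsx

THRESHOLD = 15  # warn below this charge percentage
MESSAGE = "Hey dude, I need to be charged"

def battery_percent():
    # The battery directory is usually BAT0 or BAT1, so glob for it
    for path in glob.glob('/sys/class/power_supply/BAT*/capacity'):
        with open(path) as f:
            return int(f.read().strip())
    return None  # no battery found (desktop machine?)

def main():
    percent = battery_percent()
    if percent is not None and percent < THRESHOLD:
        engine = pyttsx.init()
        engine.say(MESSAGE)
        engine.runAndWait()

if __name__ == '__main__':
    main()

Cron handles the "every 5 minutes" part: a crontab entry along the lines of */5 * * * * python /path/to/balert.py (the path is just an illustration) runs the check in the background.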
One more example: when you select a movie file in the file browser, you want to look up the IMDB rating of that movie.
The first step is to write a Python script to fetch the IMDB page and parse the rating. This is typically done with a simple GET request. For example, you can search for Star Wars by sending a GET request to http://www.imdb.com/find?q=star+.... I would typically use the urllib2 library for this. For some complex pages you may want to use the mechanize Browser, or if your page has JavaScript and you want to execute custom JS commands, you may want to use Selenium + PhantomJS.
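A minimal sketch of the fetch step, assuming the /find?q= search endpoint mentioned above (urllib2 is Python 2; the same functionality lives in urllib.request on Python 3):

import urllib
import urllib2

query = urllib.urlencode({'q': 'star wars'})
url = 'http://www.imdb.com/find?' + query

# Some sites reject requests without a browser-like User-Agent header
request = urllib2.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
search_html = urllib2.urlopen(request).read()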
Once we fetch the page we can parse it using BeautifulSoup. This is simple HTML parsing; if you know the HTML DOM you should know how to select elements and search for nodes. You can simply select the first result, then send another GET request for that movie's page and parse it to get the actual rating.
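A sketch of the parsing step, reusing search_html from the fetch sketch above. The selectors (result_text, ratingValue) are assumptions about IMDB's markup, which changes over time, so inspect the page and adjust them:

from bs4 import BeautifulSoup
import urllib2

def first_result_url(search_html):
    # Assumed markup: results sit in <td class="result_text"><a href="/title/...">
    soup = BeautifulSoup(search_html)
    link = soup.find('td', class_='result_text').find('a')
    return 'http://www.imdb.com' + link['href']

def imdb_rating(movie_html):
    # Assumed markup: the rating sits in <span itemprop="ratingValue">
    soup = BeautifulSoup(movie_html)
    node = soup.find('span', itemprop='ratingValue')
    return node.text if node else None

movie_page = urllib2.urlopen(first_result_url(search_html)).read()
print imdb_rating(movie_page)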
The next step is to show the results. I would use libnotify or a similar desktop notification library for this.
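A quick sketch of that, calling notify-send, the command-line client that ships with libnotify (the title and rating below are just placeholders):

import subprocess

def notify(title, rating):
    # Pops up a regular desktop notification via libnotify
    subprocess.call(['notify-send', title, 'IMDB rating: %s' % rating])

notify('Star Wars (1977)', '8.6')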
The last step is to attach a trigger for the script, i.e., when it should execute. You can use cron to schedule your script execution, add some commands to your bashrc, or use this New Empty File Tutorial to make a contextual menu item. It depends on the script.
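For the contextual-menu route, one option (a sketch of the general idea, not the tutorial above) is a Nautilus script: any executable dropped into ~/.local/share/nautilus/scripts/ (the exact path varies with your GNOME version) shows up in the file browser's right-click menu, and Nautilus hands it the selected files through an environment variable. The imdb_rating module and its lookup_and_notify() function are hypothetical stand-ins for the fetch/parse/notify code above.

#!/usr/bin/env python
import os
import imdb_rating  # hypothetical module wrapping the steps above

# Nautilus passes the selected files as newline-separated paths
selected = os.environ.get('NAUTILUS_SCRIPT_SELECTED_FILE_PATHS', '')
for path in selected.splitlines():
    if path:
        imdb_rating.lookup_and_notify(os.path.basename(path))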