How to crawl images from Google
There are a couple of libraries that offer the ability to crawl images from Google. One of them is icrawler, a popular Python library for crawling images. To use it, install the library with the command below.
pip install icrawler
It also provides built-in crawlers for other popular image sites such as Flickr, and for search engines such as Bing and Baidu. Once the installation has finished, you can crawl cat images with the code below.
from icrawler.builtin import GoogleImageCrawler

google_crawler = GoogleImageCrawler(storage={'root_dir': 'your_image_dir'})
google_crawler.crawl(keyword='cat', max_num=100)
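The built-in Bing and Baidu crawlers follow the same pattern; only the class name changes. A minimal sketch, assuming the default options (the output directories here are just example paths):

from icrawler.builtin import BingImageCrawler, BaiduImageCrawler

# Same interface as GoogleImageCrawler: pick a storage directory and crawl.
bing_crawler = BingImageCrawler(storage={'root_dir': 'bing_images'})
bing_crawler.crawl(keyword='cat', max_num=100)

baidu_crawler = BaiduImageCrawler(storage={'root_dir': 'baidu_images'})
baidu_crawler.crawl(keyword='cat', max_num=100)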
You can also filter by size, color, license, and date. The code below is an example of crawling with a filter that specifies these options.
from icrawler.builtin import GoogleImageCrawler

google_crawler = GoogleImageCrawler(
    feeder_threads=1,
    parser_threads=2,
    downloader_threads=4,
    storage={'root_dir': 'your_image_dir'})
filters = dict(
    size='large',
    color='orange',
    license='commercial,modify',
    date=((2017, 1, 1), (2017, 11, 30)))
google_crawler.crawl(keyword='cat', filters=filters, max_num=1000, file_idx_offset=0)
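If you crawl several keywords into the same directory, the file numbering normally restarts at 000001 and earlier images get overwritten. A minimal sketch, assuming your installed version supports file_idx_offset='auto' (which continues numbering from the highest existing index):

from icrawler.builtin import GoogleImageCrawler

google_crawler = GoogleImageCrawler(storage={'root_dir': 'your_image_dir'})
for keyword in ['cat', 'dog']:
    # 'auto' is assumed here: it offsets the file index so that images
    # from the second keyword do not overwrite those from the first.
    google_crawler.crawl(keyword=keyword, max_num=100, file_idx_offset='auto')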
icrawler is a mini framework for web crawlers. Thanks to its modular design, it is easy to use and extend. It handles media data such as images and videos well, and it can also be applied to text and other types of files. Scrapy is heavy and powerful, while icrawler is tiny and flexible. With this package, you can easily write a multi-threaded crawler by focusing on the content you want to crawl, keeping away from troublesome problems like exception handling, thread scheduling, and communication. A crawler consists of three main components (Feeder, Parser, and Downloader) connected to each other by FIFO queues. The feeder, parser, and downloader are all thread pools, so you can specify the number of threads each one uses. The package works as follows (a simplified sketch of this pipeline follows the list):
  • url_queue stores the URLs of pages which may contain images
  • task_queue stores the image URLs as well as any metadata you like; each element in the queue is a dictionary and must contain the field img_url
  • Feeder puts page URLs into url_queue
  • Parser requests and parses each page, then extracts the image URLs and puts them into task_queue
  • Downloader gets tasks from task_queue, requests the images, and saves them in the given path.
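To make the pipeline concrete, here is a simplified, self-contained sketch of the feeder/parser/downloader pattern using plain Python threads and FIFO queues. This is not icrawler's actual code; fetch_page, extract_image_urls, and save_image are placeholder stubs for illustration only.

import queue
import threading

url_queue = queue.Queue()    # page URLs that may contain images
task_queue = queue.Queue()   # dicts that must contain the field img_url

def fetch_page(url):
    # Placeholder: a real crawler would perform an HTTP GET here.
    return '<html>...</html>'

def extract_image_urls(html):
    # Placeholder: a real parser would extract <img> sources from the HTML.
    return ['https://example.com/cat1.jpg', 'https://example.com/cat2.jpg']

def save_image(img_url, root_dir):
    # Placeholder: a real downloader would fetch the image and write it to disk.
    print('would save', img_url, 'to', root_dir)

def feeder(page_urls):
    # Feeder: put page URLs into url_queue.
    for url in page_urls:
        url_queue.put(url)
    url_queue.put(None)  # sentinel: no more pages

def parser():
    # Parser: request and parse each page, then push image tasks to task_queue.
    while True:
        page_url = url_queue.get()
        if page_url is None:
            task_queue.put(None)
            break
        for img_url in extract_image_urls(fetch_page(page_url)):
            task_queue.put({'img_url': img_url})

def downloader(root_dir):
    # Downloader: take tasks from task_queue and save the images.
    while True:
        task = task_queue.get()
        if task is None:
            break
        save_image(task['img_url'], root_dir)

threads = [
    threading.Thread(target=feeder, args=(['https://example.com/gallery'],)),
    threading.Thread(target=parser),
    threading.Thread(target=downloader, args=('your_image_dir',)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

In icrawler itself, each of the three stages is a thread pool rather than a single thread, which is why the feeder_threads, parser_threads, and downloader_threads arguments in the earlier example exist.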
For more advanced usage, you can read the documentation here: https://icrawler.readthedocs.io/en/latest/
