

ImageScraper 📃

A high-performance, easy-to-use, multithreaded command-line tool that downloads images from a given webpage.





tar file:

Grab the latest stable build from PyPI: https://pypi.python.org/pypi/ImageScraper

pip:

You can also install it with pip:

$ pip install ImageScraper


Note that ImageScraper depends on lxml, requests, setproctitle, and future. If pip fails while compiling lxml, install the libxml2-dev and libxslt-dev packages on your system first.


$ image-scraper [OPTIONS] URL

You can also use it in your Python scripts. (Deprecated)

import image_scraper
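The module-level API is deprecated and no longer documented, but the core technique — fetching a page and collecting the `src` of every `<img>` tag — can be sketched with the Python standard library alone (a hypothetical standalone illustration, not ImageScraper's internals):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin


class ImgCollector(HTMLParser):
    """Collect absolute URLs from the src attribute of every <img> tag."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.image_urls = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                # Resolve relative paths against the page URL
                self.image_urls.append(urljoin(self.base_url, src))


page = '<html><body><img src="a.gif"><img src="/pics/b.png"></body></html>'
collector = ImgCollector("http://example.com/test.html")
collector.feed(page)
print(collector.image_urls)
# ['http://example.com/a.gif', 'http://example.com/pics/b.png']
```

A real run would fetch the page over HTTP (ImageScraper uses requests and lxml for this) and then download each collected URL.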


-h, --help            show this help message and exit
-m MAX_IMAGES, --max-images MAX_IMAGES
                    Limit on number of images
-s SAVE_DIR, --save-dir SAVE_DIR
                    Directory in which images should be saved
-g, --injected        Scrape injected images
--proxy-server PROXY_SERVER
                    Proxy server to use
--min-filesize MIN_FILESIZE
                    Minimum size of images to download, in bytes
--max-filesize MAX_FILESIZE
                    Maximum size of images to download, in bytes
--dump-urls           Print the URLs of the images
--formats [FORMATS [FORMATS ...]]
                    Formats to download, as a space-separated list (no
                    commas). This argument must come after the URL.
--scrape-reverse      Scrape the images in reverse order
--filename-pattern FILENAME_PATTERN
                    Only scrape images with filenames that match the given
                    regex pattern
--nthreads NTHREADS   The number of threads to use when downloading images.
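The filtering options above amount to simple predicates applied to each candidate image. A minimal sketch (hypothetical helper, not ImageScraper's actual code) of how `--filename-pattern`, `--min-filesize`, and `--max-filesize` might combine:

```python
import re


def keep_image(filename, size_bytes, pattern=None, min_size=None, max_size=None):
    """Return True if an image passes the filename and size filters."""
    if pattern is not None and not re.search(pattern, filename):
        return False  # --filename-pattern: the regex must match the filename
    if min_size is not None and size_bytes < min_size:
        return False  # --min-filesize: skip images that are too small
    if max_size is not None and size_bytes > max_size:
        return False  # --max-filesize: skip images that are too large
    return True


# Keep only PNGs between 1 KB and 1 MB
print(keep_image("photo.png", 50_000, pattern=r"\.png$",
                 min_size=1024, max_size=1_000_000))  # True
print(keep_image("photo.gif", 50_000, pattern=r"\.png$"))  # False
```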

If you downloaded the tar:

Extract the contents of the tar file.

$ cd ImageScraper/
$ python setup.py install
$ image-scraper --max-images 10 [url to scrape]


Scrape all images

$ image-scraper ananth.co.in/test.html

Scrape at most 2 images

$ image-scraper -m 2 ananth.co.in/test.html

Scrape only gifs and download to folder ./mygifs

$ image-scraper -s mygifs ananth.co.in/test.html --formats gif


By default, a new folder called "images_" will be created in the working directory, containing all the downloaded images.


Q.) Why were some images not downloaded?

It could be that the content was injected into the page via JavaScript; this scraper doesn't run JavaScript.
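For example, when a page inserts its images from a script, the static HTML that a scraper fetches contains no `<img>` tags at all, so there is nothing to find (a hypothetical illustration using the standard library):

```python
from html.parser import HTMLParser


class ImgCounter(HTMLParser):
    """Count <img> tags present in the raw (static) HTML."""

    def __init__(self):
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.count += 1


# The image only appears after a browser runs the script;
# a static scraper sees zero <img> tags in this markup.
page = """<html><body>
<script>document.body.innerHTML += '<img src="late.png">';</script>
</body></html>"""
counter = ImgCounter()
counter.feed(page)
print(counter.count)  # 0
```

The `-g/--injected` option exists for this case; for pages it cannot handle, a browser-driven tool is needed instead.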


If you want to add features, improve existing ones, or report issues, feel free to send a pull request!



ImageScraper is to be used for educational and research purposes only. The authors take NO responsibility and/or liability for how you choose to use any of the tools/source code/any files provided. By using ImageScraper, you understand that you are AGREEING TO USE IT AT YOUR OWN RISK.