Detecting Misconfigurations and Accumulating Server Data with Web-Sorrow

Web-Sorrow is a Perl-based scanner that lets you locate server misconfigurations and collect server information. Since the tool is written in Perl, it should run on almost any machine where Perl works. It is safe to run against web servers because it is focused entirely on enumeration and gathering information about the target server; it is not designed to exploit anything or perform harmful attacks.


Here are some of Web-Sorrow's most important features:
  • CMS (Content Management System) detection
  • Port scanning
  • Login page scanning
  • Proxy support
  • Bruteforce (Subdomains, Files, and Directories)
  • Stealth
  • Error bagging
  • Standard set of scans, including directory indexing, banner grabbing, language detection, robots.txt, HTTP 200 response testing, thumbs.db scanning, and more

    -host [host]     --  Defines the host to scan. Accepts a list separated
                         by semicolons and 1.1.1.* style ranges; such
                         ranges can also be used with domains
    -port [port num] --  Defines port number to use (Default is 80)
    -proxy [ip:port] --  Use an HTTP, HTTPS, or gopher proxy server
    -S          --  Standard set of scans including: aggressive directory
                    indexing, banner grabbing, language detection, robots.txt,
                    HTTP 200 response testing, Apache user enum, SSL cert,
                    mobile page testing, sensitive items scanning,
                    thumbs.db scanning, content negotiation, and non-port-80
                    HTTP port sweeps
    -auth       --  Scan for login pages, admin consoles, and email webapps
    -Cp [dp | jm | wp | all] --  Scan for CMS plugins.
                    dp = Drupal, jm = Joomla, wp = WordPress
    -Fd         --  Scan for common interesting files and dirs (Bruteforce)
    -Sfd        --  Very small files and dirs enum (for the sake of time)
    -Sd         --  BruteForce Subdomains (host given must be a domain. Not an IP)
    -Ws         --  Scan for Web Services on host such as: cms version info,
                    blogging services, favicon fingerprints, and hosting provider
    -Db         --  BruteForce Directories with the big dirbuster Database
    -Df [option] -- Scan for default files. Platforms/options: Apache,
                    Frontpage, IIS, Oracle9i, Weblogic, Websphere,
                    MicrosoftCGI, all (enables all)
    -ninja      --  A lightweight and hard-to-detect scan that uses bits and
                    pieces from other scans (it is not recommended to combine it
                    with other scans if you want to be stealthy. See readme.txt)
    -fuzzsd     --  Fuzz every found file for Source Disclosure
    -e          --  Everything. run all scans
    -intense    --  like -e but no bruteforce
    -I          --  Passively scan for interesting strings in responses such as:
                    emails, wordpress dirs, cgi dirs, SSI, facebook fbids,
                    and much more (results may contain partial HTML)
    -dp         --  Do passive tests on requests: banner grabbing, dir indexing,
                    non-200 HTTP status, strings in error pages,
                    passive web services
    -flag [txt] --  Report when this text shows up in responses
    -ua [ua] --  User agent to use. Put it in quotes (default is Firefox on Linux)
    -Rua     --  Generate a new random UserAgent per request
    -R       --  Only request HTTP headers via range requests.
                 This is much faster, but some features and capabilities
                 may not work with this option. It is ideal when
                 you only want to know whether something exists or not,
                 as in -auth or -Fd
    -gzip    --  Compresses HTTP responses from the host for speed. Some banner
                 grabbing will not work
    -d [dir] --  Only scan within this directory
    -https   --  Use https (ssl) instead of http
    -nr      --  Don't do response analysis, i.e. false positive testing and
                 interesting headers (other than banner grabbing). If
                 you want your scan to be less verbose, use -nr
    -Shadow  --  Request pages from Google cache instead of from the host
                 (mostly for -I; otherwise it's unreliable)
    -die     --  Stop scanning host if it appears to be offline
    -reject  --  Treat this http status code as a 404 error
perl Wsorrow.pl -host [host] -S
perl Wsorrow.pl -host [host] -Fd -fuzzsd
perl Wsorrow.pl -host [host] -Cp dp,jm -ua "script w/ the munchies"
perl Wsorrow.pl -host [host] -d /wordpress -Cp wp
perl Wsorrow.pl -host [host] -port 8080 -proxy [ip:port] -S -Ws -I
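The -Rua option rotates the User-Agent on every request. The same idea can be sketched outside the tool; the snippet below is a minimal, hypothetical illustration of per-request User-Agent rotation (the agent strings, the pick_ua helper, and example.com are assumptions, not Web-Sorrow's internals), printed as a dry run rather than actually sent:

```shell
# Minimal sketch of per-request User-Agent rotation (what -Rua automates).
# The agent strings below are illustrative examples, not Web-Sorrow's list.
UAS="Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) Safari/605.1.15"

pick_ua() {
    # Pick one line at random from the list above.
    echo "$UAS" | shuf -n 1
}

UA="$(pick_ua)"
# Dry run: print the request we would send rather than sending it.
echo "curl -A \"$UA\" http://example.com/"
```

A scanner using this pattern simply calls pick_ua before each request, so no two requests are guaranteed to share a fingerprint.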

  • The -ninja option does not make other scans stealthy; it is itself a scan that uses very few requests.
  • When using -Cp you can scan a single CMS's plugins or several at once, for example: -Cp wp,dp or -Cp wp;dp. It doesn't matter what you separate the options with.
  • When using -ua, you should use quotes if the user agent contains whitespace.
  • If you use -e together with other scans, those scans will run twice.
  • To log results to a file: perl Wsorrow.pl -host [host] -S -I >logfile.txt.
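To run the same scan set against several targets with one log per host, a small wrapper can loop over a host list. This is a hedged sketch: the hosts.txt file, its contents, and the Wsorrow.pl path are assumptions, and the script only prints each command (a dry run) so you can review them before executing anything:

```shell
# Dry-run wrapper: print one Web-Sorrow command per target host.
# Assumptions: Wsorrow.pl sits in the current directory, and targets are
# listed one per line in hosts.txt (both hypothetical here).
printf '%s\n' "example.com" "scanme.example.org" > hosts.txt

while IFS= read -r host; do
    # -S -I: standard scans plus passive string scanning; log per host.
    echo "perl Wsorrow.pl -host $host -S -I >${host}.log"
done < hosts.txt
```

Dropping the echo (and the quotes around the command) would execute the scans for real, writing example.com.log, scanme.example.org.log, and so on.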
