Using Ansible, Jenkins and Docker to build fast test environments

I am planning to build a test environment using Ansible, Jenkins and Docker together. The plan is as follows:

Create Ansible playbooks for every tool used in your environment and store them in Git.
Use Jenkins jobs to create Docker containers on the dev server and provision those containers with the Ansible playbooks.
The Jenkins jobs will give the user the option to select which playbooks to apply, and the containers will be built accordingly.
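A minimal sketch of what one of those per-tool playbooks might look like; the inventory group `test_containers`, the package choice and the template name are assumptions for illustration, not part of the original plan:

```yaml
# Illustrative playbook: provision a test container with nginx
- hosts: test_containers        # assumed inventory group of running containers
  become: yes
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
        update_cache: yes

    - name: Deploy application configuration
      template:
        src: app.conf.j2        # hypothetical template shipped alongside the playbook
        dest: /etc/nginx/conf.d/app.conf

    - name: Ensure nginx is running
      service:
        name: nginx
        state: started
```

Each tool in the environment would get a small playbook like this, kept in Git so the Jenkins job can check it out and run it against freshly created containers.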

The whole concept can be summarized in the image below.


The benefits I see are:

Automatic replication of exact production environments.
Scale your test environment as required.
Provide different platforms for application testing on a single server.
Faster integration testing.
Promotes an agile methodology.
Freedom to develop and customize the test environment.
Developers and testers can create environments on their own, even if they know nothing about the OS or its configuration.
Test the deployment of the app in a clean environment, from a fresh build.

Has anyone implemented this type of environment architecture? I would like to discuss its feasibility and actual benefits.

2 Answers

    I am using a similar but different approach:

    • Define Dockerfiles or Chef/Puppet/Ansible/Salt provisioning, as in your approach.

    • Put those descriptions under version control, as in your approach.

    • Use one Jenkins instance (Jenkins A) for CI and nightly builds of the images, uploading them into a registry. This lets you manage different versions and keep old images; it introduces an image registry into your diagram.
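A sketch of what such a nightly image build on Jenkins A could look like as a declarative pipeline; the registry host `registry.example.com` and the image name are assumptions:

```groovy
// Jenkinsfile sketch: nightly image build pushed to a private registry
pipeline {
    agent any
    triggers {
        cron('H 2 * * *')   // rebuild every night
    }
    stages {
        stage('Build image') {
            steps {
                // tag with the build number so old images stay available in the registry
                sh 'docker build -t registry.example.com/test-env/base:${BUILD_NUMBER} .'
            }
        }
        stage('Push image') {
            steps {
                sh 'docker push registry.example.com/test-env/base:${BUILD_NUMBER}'
            }
        }
    }
}
```

Tagging with the build number (rather than only `latest`) is what makes it possible to keep and roll back to older image versions.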

    • Extend those images with Jenkins Swarm slaves. This enables ad-hoc deployment in your Jenkins environment.
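Extending a base image into a Swarm slave could look roughly like this Dockerfile; the base image name, the swarm-client version and the Jenkins master URL are all assumptions:

```dockerfile
# Dockerfile sketch: turn a provisioned base image into a Jenkins Swarm slave
# (base image name is an assumed example from the registry)
FROM registry.example.com/test-env/base:latest

# The Swarm client needs a JRE
RUN apt-get update && apt-get install -y default-jre wget && \
    rm -rf /var/lib/apt/lists/*

# Fetch the Jenkins Swarm client jar (version is an assumption)
RUN wget -O /usr/local/lib/swarm-client.jar \
    https://repo.jenkins-ci.org/releases/org/jenkins-ci/plugins/swarm-client/3.9/swarm-client-3.9.jar

# On start, the container registers itself with the Jenkins master as a slave
CMD ["java", "-jar", "/usr/local/lib/swarm-client.jar", \
     "-master", "http://jenkins.example.com:8080", \
     "-name", "docker-slave"]
```

Because the slave registers itself on startup, simply running more of these containers scales the build capacity with no manual node configuration on the master.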

    Here I separate the building of the software from the building of the build slaves themselves.

    • I deploy a second Jenkins instance (Jenkins B) to build the software on those environments.

    • Then I choose between containers I want to deploy permanently and build containers I want to deploy on demand.

      • Permanent containers, which are heavily used by build jobs, are started as Swarm slaves and exchanged daily by the nightly builds.
      • Ad-hoc containers are managed by the Jenkins Docker Plugin.
      • For more complex environment configurations I use docker-compose to manage ad-hoc availability.
      • For the ad-hoc environments, namely the VMs running those configurations, I use Docker Swarm.
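For the more complex configurations, a docker-compose file tying a few of those containers together might look like this; the service names, images and port mappings are assumptions:

```yaml
# docker-compose.yml sketch: an ad-hoc multi-container test environment
version: '2'
services:
  app:
    image: registry.example.com/test-env/app:latest   # assumed app image from the registry
    depends_on:
      - db
    ports:
      - "8080:8080"
  db:
    image: postgres:9.6
    environment:
      POSTGRES_DB: testdb
```

A single `docker-compose up -d` then brings the whole environment up on demand, and `docker-compose down` tears it away cleanly after the test run.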


    If you want to test with a Docker image that has the latest available version of a given package on a given OS, then you need to set up nightly Docker image rebuilds. I have a very small, simple project that can get you up and running with nightly image rebuilds. I use it to rebuild GCC trunk from source, but you could just as easily use it to install or upgrade packages from a package manager, or to deal with any other build or runtime dependency that might be updated upstream and have a potential impact on your project. This way you catch issues early, before clients or users encounter them.
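The nightly rebuild itself can be as simple as a cron entry that rebuilds the image without the layer cache, so fresh upstream packages are always pulled in; the image name and build-context path below are assumptions:

```
# crontab sketch: rebuild the image from scratch every night at 02:00
0 2 * * * docker build --no-cache -t test-env/nightly:latest /srv/docker/test-env
```

The `--no-cache` flag is the important part: without it, cached `RUN apt-get install ...` layers would be reused and the image would never pick up new package versions.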
