Spring Boot Docker Guide (QFY translation)

Many people use containers to wrap their Spring Boot applications, and building containers is not a simple thing to do. This is a guide for developers of Spring Boot applications: containers are not always a good abstraction for developers - they force you to learn about and think about very low level concerns - but you may on occasion be called on to create or use a container, so it pays to understand the building blocks. Here we aim to show you some of the choices you can make if you are faced with the prospect of needing to create your own container.

We will assume that you know how to create and build a basic Spring Boot application. If you don't, go to one of the Getting Started Guides, for example the one on building a REST Service. Copy the code from there and practise with some of the ideas below.

NOTE: There is also a Getting Started Guide on Docker, which is also a good starting point, but it doesn't cover the range of choices that we cover here, or in as much detail.

A Basic Dockerfile

A Spring Boot application is easy to convert into an executable JAR file. All the Getting Started Guides do this, and every app downloaded from Spring Initializr has a build step to create an executable JAR. With Maven you run ./mvnw install, and with Gradle you run ./gradlew build. A basic Dockerfile to run that JAR, placed at the top level of your project, would then look like this:

Dockerfile
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]

The JAR_FILE is passed in as part of the docker command (it differs for Maven and Gradle).

Maven:

$ docker build --build-arg JAR_FILE=target/*.jar -t myorg/myapp .

Gradle:

$ docker build --build-arg JAR_FILE=build/libs/*.jar -t myorg/myapp .

Of course, once you have chosen a build system, you don't need the ARG - you can just hard code the jar location.

For example, with Maven:

Dockerfile
FROM openjdk:8-jdk-alpine
VOLUME /tmp
COPY target/*.jar app.jar
ENTRYPOINT ["java","-jar","/app.jar"]

Then we can simply build the image:

$ docker build -t myorg/myapp .

and run it like this:

$ docker run -p 8080:8080 myorg/myapp
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.0.2.RELEASE)
Nov 06, 2018 2:45:16 PM org.springframework.boot.StartupInfoLogger logStarting
INFO: Starting Application v0.1.0 on b8469cdc9b87 with PID 1 (/app.jar started by root in /)
Nov 06, 2018 2:45:16 PM org.springframework.boot.SpringApplication logStartupProfileInfo
...

If you want to poke around inside the image, you can open a shell in it like this (the base image does not have bash):

$ docker run -ti --entrypoint /bin/sh myorg/myapp
/ # ls
app.jar  dev      home     media    proc     run      srv      tmp      var
bin      etc      lib      mnt      root     sbin     sys      usr
/ #
Note
The alpine base image we used in the example does not have bash, so this is an ash shell. It has some but not all of the features of bash.

If you have a container that is already running and you want to peek into it, you can do that with docker exec:
$ docker run --name myapp -ti --entrypoint /bin/sh myorg/myapp
$ docker exec -ti myapp /bin/sh
/ #

where myapp is the --name passed to the docker run command. If you didn't use --name, docker assigns a mnemonic name, which you can scrape from the output of docker ps. You could also use the container's sha identifier from docker ps instead of the name.

The Entry Point

The Dockerfile above uses ENTRYPOINT in exec form, so the java process is not wrapped in a shell. The advantage is that the java process responds to KILL signals sent to the container. In practice, that means, for instance, that if you docker run your image locally, you can stop it with CTRL-C. If the command line gets a bit long, you can extract it out into a shell script and COPY it into the image before you run it. For example:

Dockerfile
FROM openjdk:8-jdk-alpine
VOLUME /tmp
COPY run.sh .
COPY target/*.jar app.jar
ENTRYPOINT ["run.sh"]

Remember to use exec java … to launch the java process (so that it can handle the KILL signals):

run.sh

#!/bin/sh
exec java -jar /app.jar

Another interesting aspect of the entry point is whether or not you can inject environment variables into the java process at runtime. For example, suppose you want to have the option of adding java command line options at runtime. You might try to do this:

Dockerfile
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","${JAVA_OPTS}","-jar","/app.jar"]

and

$ docker build -t myorg/myapp .
$ docker run -p 9000:9000 -e JAVA_OPTS=-Dserver.port=9000 myorg/myapp

This does not work because the ${} substitution requires a shell; since the process is not launched through a shell, the options are never applied. You can fix that by moving the entry point to a script (like the run.sh example above), or by explicitly creating a shell in the entry point. For example:

Dockerfile
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["sh", "-c", "java ${JAVA_OPTS} -jar /app.jar"]

which you can then run like this:

$ docker run -p 8080:8080 -e "JAVA_OPTS=-Ddebug -Xmx128m" myorg/myapp
...
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.2.0.RELEASE)
...
2019-10-29 09:12:12.169 DEBUG 1 --- [           main] ConditionEvaluationReportLoggingListener :
============================
CONDITIONS EVALUATION REPORT
============================
...

(The output above shows parts of the full DEBUG output that is generated with -Ddebug by Spring Boot.) Using an ENTRYPOINT with an explicit shell, as above, means that you can pass environment variables into the java command, but so far you cannot also provide command line arguments to the Spring Boot application. This trick does not work to run the app on port 9000:

$ docker run -p 9000:9000 myorg/myapp --server.port=9000
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.2.0.RELEASE)
...
2019-10-29 09:20:19.718  INFO 1 --- [           main] o.s.b.web.embedded.netty.NettyWebServer  : Netty started on port(s): 8080

The reason it doesn't work is that the docker command arguments (the --server.port=9000 part) are passed to the entry point (sh), not to the java process that it launches. To fix that, you need to add the command line from the CMD to the ENTRYPOINT:

Dockerfile
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["sh", "-c", "java ${JAVA_OPTS} -jar /app.jar ${0} ${@}"]
$ docker run -p 9000:9000 myorg/myapp --server.port=9000
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.2.0.RELEASE)
...
2019-10-29 09:30:19.751  INFO 1 --- [           main] o.s.b.web.embedded.netty.NettyWebServer  : Netty started on port(s): 9000

Note the use of ${0} for the "command" (in this case the first program argument) and ${@} for the "command arguments" (the rest of the program arguments). If you use a script for the entry point, then you do not need the ${0} (that would be /app/run.sh in the example above). For example:

run.sh
#!/bin/sh
exec java ${JAVA_OPTS} -jar /app.jar ${@}

So far the docker configuration has been very simple, but the resulting image is not very efficient. The docker image has a single filesystem layer with the fat jar in it, and every change we make to the application code changes that layer, which might be 10MB or more (even as much as 50MB for some applications). We can improve on that by splitting the JAR into multiple layers.

Smaller Images

Notice that the base image in the example above is openjdk:8-jdk-alpine. The alpine images are smaller than the standard openjdk library images on Dockerhub. There is no official alpine image for Java 11 yet (AdoptOpenJDK had one for a while, but it no longer appears on their Dockerhub page). You can also save about 20MB in the base image by using the "jre" label instead of "jdk". Not all apps work with a JRE (as opposed to a JDK), but most do, and indeed some organizations enforce a rule that every app has to, because of the risk of abuse of some of the JDK features (like compilation).

Another trick that could get you a smaller image is to use JLink, which is bundled with OpenJDK 11. JLink allows you to build a custom JRE distribution from a subset of the modules in the full JDK, so you don't need a standard JRE or JDK in the base image. In principle, this would get you a smaller total image size than using the official openjdk docker images. In practice, you won't (yet) be able to use the alpine base image with JDK 11, so your choice of base image will be limited and will probably result in a larger final image size. Also, a custom JRE in your own base image cannot be shared among other applications, since they would each need different customizations. So you might have smaller images for all your applications, but they still take longer to start because they don't benefit from caching the JRE layer. (A sketch of a jlink-based build follows at the end of this section.)

That last point highlights a really important concern for image builders: the goal is not necessarily always going to be to build the smallest image possible. Smaller images are generally a good idea because they take less time to upload and download, but only if none of the layers in them are already cached. Image registries are quite sophisticated these days, and you can easily lose the benefit of those features by trying to be clever with the image construction. If you use common base layers, the total size of an image is less of a concern, and will probably become even less of one as registries and platforms evolve. Having said that, it is still important and useful to try to optimize the layers in our application image, but the goal should always be to put the fastest changing stuff in the highest layers, and to share as many of the large, lower layers as possible with other applications.
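
For illustration, here is a minimal sketch of a jlink-based multi-stage build. The module list, image tags, and paths are assumptions for a typical web application (they are not part of the original guide); run jdeps against your own jar to compute the real module list.

Dockerfile
FROM openjdk:11-jdk-slim as jre-build
# Build a trimmed JRE containing only the modules the app needs.
# The module list below is an assumption; compute yours with: jdeps --print-module-deps app.jar
RUN jlink \
    --add-modules java.base,java.logging,java.naming,java.management,java.sql,java.instrument,java.desktop \
    --strip-debug --no-header-files --no-man-pages --compress=2 \
    --output /opt/jre

FROM debian:buster-slim
COPY --from=jre-build /opt/jre /opt/jre
COPY target/*.jar app.jar
ENTRYPOINT ["/opt/jre/bin/java","-jar","/app.jar"]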

A Better Dockerfile

A Spring Boot fat jar naturally has "layers" because of the way the jar itself is packaged. If we unpack it first, it will already be divided into external and internal dependencies. To do this in one step in the docker build, we need to unpack the jar first. For example (sticking with Maven here, but the Gradle version is pretty similar; the equivalent Gradle commands are sketched a little further down):

$ mkdir target/dependency
$ (cd target/dependency; jar -xf ../*.jar)
$ docker build -t myorg/myapp .

using this Dockerfile:

Dockerfile
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG DEPENDENCY=target/dependency
COPY ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY ${DEPENDENCY}/META-INF /app/META-INF
COPY ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-cp","app:app/lib/*","hello.Application"]

There are now three layers, with all the application resources in the later two layers. If the application dependencies don't change, then the first layer (from BOOT-INF/lib) will not change, so the build will be faster, and so will the startup of the container at runtime, as long as the base layers are already cached.
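
If you build with Gradle instead, the unpack step only differs in the paths (the jar ends up in build/libs by default), and you can point the DEPENDENCY build arg at the unpacked directory:

$ mkdir build/dependency
$ (cd build/dependency; jar -xf ../libs/*.jar)
$ docker build --build-arg DEPENDENCY=build/dependency -t myorg/myapp .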

Note
We used a hardcoded main application class, hello.Application. It will probably be different for your application; you could parameterize it with another ARG if you wanted to. You could also copy the Spring Boot fat JarLauncher into the image and use it to run the app - it works, and you don't need to specify the main class, but it is slightly slower on startup. A sketch of that approach follows.
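
Here is a minimal sketch of that JarLauncher variant, assuming the jar has been unpacked as above. The exploded fat-jar layout (including the loader classes under org/) is preserved so that the launcher can find the manifest, the libraries, and the application classes:

Dockerfile
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG DEPENDENCY=target/dependency
# keep the exploded fat-jar layout so JarLauncher can resolve everything from the working directory
COPY ${DEPENDENCY}/BOOT-INF/lib /app/BOOT-INF/lib
COPY ${DEPENDENCY}/org /app/org
COPY ${DEPENDENCY}/META-INF /app/META-INF
COPY ${DEPENDENCY}/BOOT-INF/classes /app/BOOT-INF/classes
WORKDIR /app
ENTRYPOINT ["java","org.springframework.boot.loader.JarLauncher"]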

Tweaks

If you want your app to start as quickly as possible (most people do), there are some tweaks you might consider. Here are some ideas:

  • Use the spring-context-indexer (see the reference documentation). It's not going to add much for small apps, but every little helps.

  • Don't use actuators if you can afford not to.

  • Use Spring Boot 2.1 and Spring 5.1.

  • Fix the location of the Spring Boot config file(s) with spring.config.location (command line argument, System property, etc.).

  • Switch off JMX - you probably don't need it in a container - with spring.jmx.enabled=false.

  • Run the JVM with -noverify. Also consider -XX:TieredStopAtLevel=1 (that will slow down the JIT later, in exchange for the saved startup time).

  • Use the container memory hints for Java 8: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap. With Java 11 this is automatic by default.

Your app might not need a full CPU at runtime, but it will need multiple CPUs to start up as quickly as possible (at least 2, 4 is better). If you don't mind a slower startup, you can throttle the CPUs down below 4. If you are forced to start with less than 4 CPUs, it can help to set -Dspring.backgroundpreinitializer.ignore=true, which prevents Spring Boot from creating a new thread that it probably won't be able to use (this works with Spring Boot 2.1.0 and above). A Dockerfile sketch that applies several of the JVM flags from the list above follows.
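
Here is a sketch of a Dockerfile entry point that applies several of the JVM flags from the list above. All of them were listed for Java 8; drop the ones that don't apply to your app or Java version:

Dockerfile
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
# -noverify and -XX:TieredStopAtLevel=1 trade peak JIT performance for faster startup (Java 8)
# the cgroup flags make the Java 8 JVM respect the container memory limit (automatic in Java 11)
ENTRYPOINT ["java","-noverify","-XX:TieredStopAtLevel=1", \
            "-XX:+UnlockExperimentalVMOptions","-XX:+UseCGroupMemoryLimitForHeap", \
            "-Dspring.jmx.enabled=false","-jar","/app.jar"]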

Multi-Stage Build

The Dockerfile above assumed that the fat JAR had already been built on the command line. You can also do that step in docker using a multi-stage build, copying the result from one image to another. For example, using Maven:

Dockerfile
FROM openjdk:8-jdk-alpine as build
WORKDIR /workspace/app
COPY mvnw .
COPY .mvn .mvn
COPY pom.xml .
COPY src src
RUN ./mvnw install -DskipTests
RUN mkdir -p target/dependency && (cd target/dependency; jar -xf ../*.jar)
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG DEPENDENCY=/workspace/app/target/dependency
COPY --from=build ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY --from=build ${DEPENDENCY}/META-INF /app/META-INF
COPY --from=build ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-cp","app:app/lib/*","hello.Application"]

The first image is labelled "build", and it is used to run Maven, build the fat jar, and then unpack it. The unpacking could also be done by Maven or Gradle (this is the approach taken in the Getting Started Guide) - there really isn't much difference, except that the build configuration would have to be edited (e.g. the pom.xml) and a plugin added.

Notice that the source code has been split into four layers. The later layers contain the build configuration and the source code for the app, and the earlier layers contain the build system itself (the Maven wrapper). This is a small optimization, and it also means that we don't have to copy the target directory to the docker image, not even to the temporary image used for the build.

Every build where the source code changes will be slow, because the Maven cache has to be re-created in the first RUN section. But you get a completely standalone build that anyone with docker can run to get your application running. That can be quite useful in some environments, e.g. where you need to share your code with people who don't know Java.

Experimental Features

Docker 18.06 comes with some "experimental" features, including a way to cache build dependencies. To switch them on, you need a flag in the daemon (dockerd) and an environment variable when you run the client, and then you can add a magic first line to your Dockerfile:

Dockerfile
# syntax=docker/dockerfile:experimental

The RUN directive then accepts a new flag, --mount. Here's a full example:

Dockerfile
# syntax=docker/dockerfile:experimental
FROM openjdk:8-jdk-alpine as build
WORKDIR /workspace/app
COPY mvnw .
COPY .mvn .mvn
COPY pom.xml .
COPY src src
RUN --mount=type=cache,target=/root/.m2 ./mvnw install -DskipTests
RUN mkdir -p target/dependency && (cd target/dependency; jar -xf ../*.jar)
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG DEPENDENCY=/workspace/app/target/dependency
COPY --from=build ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY --from=build ${DEPENDENCY}/META-INF /app/META-INF
COPY --from=build ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-cp","app:app/lib/*","hello.Application"]

Then run it:

$ DOCKER_BUILDKIT=1 docker build -t myorg/myapp .
...
 => /bin/sh -c ./mvnw install -DskipTests              5.7s
 => exporting to image                                 0.0s
 => => exporting layers                                0.0s
 => => writing image sha256:3defa...
 => => naming to docker.io/myorg/myapp

With the experimental features, the output on the console looks slightly different, but you can see that the Maven build now only takes a few seconds instead of minutes, once the cache is warm. The Gradle version of this Dockerfile configuration is very similar:

Dockerfile
# syntax=docker/dockerfile:experimental
FROM openjdk:8-jdk-alpine AS build
WORKDIR /workspace/app
COPY . /workspace/app
RUN --mount=type=cache,target=/root/.gradle ./gradlew clean build
RUN mkdir -p build/dependency && (cd build/dependency; jar -xf ../libs/*.jar)
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG DEPENDENCY=/workspace/app/build/dependency
COPY --from=build ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY --from=build ${DEPENDENCY}/META-INF /app/META-INF
COPY --from=build ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-cp","app:app/lib/*","hello.Application"]
Note
While these features are in the experimental phase, the options for switching buildkit on and off depend on the version of docker that you are using. Check the documentation for your version (the example above is correct for Docker 18.06).

Security Aspects

Just as in classic VM deployments, processes should not be run with root permissions. Instead, the image should contain a non-root user that runs the app. In a Dockerfile, this can be achieved by adding another layer that adds a (system) user and group, and then setting it as the current user (instead of the default, root):

Dockerfile
FROM openjdk:8-jdk-alpine
RUN addgroup -S demo && adduser -S demo -G demo
USER demo

In case someone manages to break out of your app and run system commands inside the container, this precaution limits their capabilities (the principle of least privilege).

NOTE: Some later Dockerfile directives only work as root, so you may have to move the USER directive further down (e.g. if you plan to install more packages into the container, which only works as root).

NOTE: For other approaches, not using a Dockerfile might be more amenable. For instance, in the buildpack approach described later, most implementations use a non-root user by default.

Another consideration is that the full JDK is probably not needed by most apps at runtime, so once we have a multi-stage build, we can safely switch to a JRE base image. So in the multi-stage build above we can use

Dockerfile
FROM openjdk:8-jre-alpine

as the base for the final, runnable image. As mentioned above, this also saves some space in the image that would otherwise be occupied by tools that are not needed at runtime.
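
Putting the two ideas together, here is a sketch of that final stage with a JRE base image and the non-root user from above (the "build" stage is unchanged):

Dockerfile
FROM openjdk:8-jre-alpine
VOLUME /tmp
# create an unprivileged user and group and switch to it
RUN addgroup -S demo && adduser -S demo -G demo
USER demo
ARG DEPENDENCY=/workspace/app/target/dependency
COPY --from=build ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY --from=build ${DEPENDENCY}/META-INF /app/META-INF
COPY --from=build ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-cp","app:app/lib/*","hello.Application"]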

Build Plugins

If you don’t want to call docker directly in your build, there is quite a rich set of plugins for Maven and Gradle that can do that work for you. Here are just a few.

Spring Boot Plugins

With Spring Boot 2.3 you have the option of building an image from Maven or Gradle directly with Spring Boot. As long as you are already building a Spring Boot jar file, you only need to call the plugin directly. With Maven:

$ ./mvnw spring-boot:build-image

and with Gradle

$ ./gradlew bootBuildImage

It uses the local docker daemon (which therefore must be installed) but doesn’t require a Dockerfile . The result is an image called docker.io/<group>/<artifact>:latest by default. You can modify the image name in Maven using

<project>
	<build>
		<plugins>
			<plugin>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-maven-plugin</artifactId>
				<configuration>
					<image>
						<name>myorg.demo</name>
					</image>
				</configuration>
			</plugin>
		</plugins>
	</build>
</project>

and in Gradle using

bootBuildImage {
	imageName = "myorg/demo"
}

The image is built using Cloud Native Buildpacks , where the default builder is optimized for a Spring Boot application (you can customize it but the defaults are useful). The image is layered efficiently, like in the examples above. It also uses the CF memory calculator to size the JVM at runtime based on the container resources available, so when you run the image you will see the memory calculator reporting its results:

$ docker run -p 8080:8080 myorg/demo
Container memory limit unset. Configuring JVM for 1G container.
Calculated JVM Memory Configuration: -XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=86557K -XX:ReservedCodeCacheSize=240M -Xss1M -Xmx450018K (Head Room: 0%, Loaded Class Count: 12868, Thread Count: 250, Total Memory: 1073741824)
 ...

Spotify Maven Plugin

The Spotify Maven Plugin is a popular choice. It requires the application developer to write a Dockerfile and then runs docker for you, just as if you were doing it on the command line. There are some configuration options for the docker image tag and other stuff, but it keeps the docker knowledge in your application concentrated in a Dockerfile, which many people like. For really basic usage it will work out of the box with no extra configuration:

$ mvn com.spotify:dockerfile-maven-plugin:build
...
[INFO] Building Docker context /home/dsyer/dev/demo/workspace/myapp
[INFO]
[INFO] Image will be built without a name
[INFO]
...
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 7.630 s
[INFO] Finished at: 2018-11-06T16:03:16+00:00
[INFO] Final Memory: 26M/595M
[INFO] ------------------------------------------------------------------------

That builds an anonymous docker image. We can tag it with docker on the command line now, or use Maven configuration to set it as the repository. Example (without changing the pom.xml):

$ mvn com.spotify:dockerfile-maven-plugin:build -Ddockerfile.repository=myorg/myapp

Or in the pom.xml:

pom.xml

<build>
    <plugins>
        <plugin>
            <groupId>com.spotify</groupId>
            <artifactId>dockerfile-maven-plugin</artifactId>
            <version>1.4.8</version>
            <configuration>
                <repository>myorg/${project.artifactId}</repository>
            </configuration>
        </plugin>
    </plugins>
</build>

Palantir Gradle Plugin

The Palantir Gradle Plugin works with a Dockerfile, and it is also able to generate a Dockerfile for you. It then runs docker as if you were running it on the command line. First you need to import the plugin into your build.gradle:

build.gradle

buildscript {
    ...
    dependencies {
        ...
        classpath('gradle.plugin.com.palantir.gradle.docker:gradle-docker:0.13.0')
    }
}

and then finally you apply the plugin and call its task:

build.gradle

apply plugin: 'com.palantir.docker'
group = 'myorg'
bootJar {
    baseName = 'myapp'
    version =  '0.1.0'
}
task unpack(type: Copy) {
    dependsOn bootJar
    from(zipTree(tasks.bootJar.outputs.files.singleFile))
    into("build/dependency")
}
docker {
    name "${project.group}/${bootJar.baseName}"
    copySpec.from(tasks.unpack.outputs).into("dependency")
    buildArgs(['DEPENDENCY': "dependency"])
}

In this example we have chosen to unpack the Spring Boot fat jar in a specific location in the build directory, which is the root for the docker build. Then the multi-layer (not multi-stage) Dockerfile from above will work.
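
With that configuration in place, and assuming the multi-layer Dockerfile sits at the project root (the plugin's default Dockerfile location), building the image is a matter of running the plugin's docker task:

$ ./gradlew build docker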

Spring Boot Maven and Gradle Plugins

Spring Boot build plugins for Maven and Gradle can be used to create container images. The plugins create an OCI image (the same format as one created by docker build) using Cloud Native Buildpacks . You don’t need a Dockerfile but you do need a docker daemon, either locally (which is what you use when you build with docker) or remotely via DOCKER_HOST environment variable. Example with Maven (without changing the pom.xml):

$ ./mvnw spring-boot:build-image -Dspring-boot.build-image.imageName=myorg/myapp

and with Gradle:

$ ./gradlew bootBuildImage --imageName=myorg/myapp

The first build might take a long time because it has to download some container images, and the JDK, but subsequent builds will be fast. If you run the image:

$ docker run -p 8080:8080 -t myorg/myapp
Container memory limit unset. Configuring JVM for 1G container.
Calculated JVM Memory Configuration: -XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=86381K -XX:ReservedCodeCacheSize=240M -Xss1M -Xmx450194K (Head Room: 0%, Loaded Class Count: 12837, Thread Count: 250, Total Memory: 1073741824)
 ....
2015-03-31 13:25:48.035  INFO 1 --- [           main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8080 (http)
2015-03-31 13:25:48.037  INFO 1 --- [           main] hello.Application

you will see it start up as normal. You might also notice that the JVM memory requirements were computed and set as command line options inside the container. This is the same memory calculation that has been in use in Cloud Foundry build packs for many years. It represents significant research into the best choices for a range of JVM applications, including but not limited to Spring Boot applications, and the results are usually much better than the default setting from the JVM. You can customize the command line options and override the memory calculator using environment variables.
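
For example, since JAVA_TOOL_OPTIONS is a standard JVM environment variable, flags placed there are picked up by the JVM inside the container. Whether and how they interact with the calculator's own settings depends on the buildpack, so treat this as a sketch and check the buildpack documentation:

$ docker run -p 8080:8080 -e JAVA_TOOL_OPTIONS="-Xss256k" -t myorg/myapp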

Jib Maven and Gradle Plugins

Google has an open source tool called Jib that is relatively new, but quite interesting for a number of reasons. Probably the most interesting thing is that you don’t need docker to run it - it builds the image using the same standard output as you get from docker build but doesn’t use docker unless you ask it to - so it works in environments where docker is not installed (not uncommon in build servers). You also don’t need a Dockerfile (it would be ignored anyway), or anything in your pom.xml to get an image built in Maven (Gradle would require you to at least install the plugin in build.gradle). Another interesting feature of Jib is that it is opinionated about layers, and it optimizes them in a slightly different way than the multi-layer Dockerfile created above. Just like in the fat jar, Jib separates local application resources from dependencies, but it goes a step further and also puts snapshot dependencies into a separate layer, since they are more likely to change. There are configuration options for customizing the layout further. Example with Maven (without changing the pom.xml):

$ mvn com.google.cloud.tools:jib-maven-plugin:build -Dimage=myorg/myapp

To run the above command you will need to have permission to push to Dockerhub under the myorg repository prefix. If you have authenticated with docker on the command line, that will work from your local ~/.docker configuration. You can also set up a Maven "server" authentication in your ~/.m2/settings.xml (the id of the repository is significant):

settings.xml

    <server>
      <id>registry.hub.docker.com</id>
      <username>myorg</username>
      <password>...</password>
    </server>

There are other options, e.g. you can build locally against a docker daemon (like running docker on the command line), using the dockerBuild goal instead of build. Other container registries are also supported, and for each one you will need to set up local authentication via docker or Maven settings. The gradle plugin has similar features, once you have it in your build.gradle, e.g.:

build.gradle

plugins {
  ...
  id 'com.google.cloud.tools.jib' version '1.8.0'
}

or in the older style used in the Getting Started Guides:

build.gradle

buildscript {
    repositories {
      maven {
        url "https://plugins.gradle.org/m2/"
      }
      mavenCentral()
    }
    dependencies {
        classpath('org.springframework.boot:spring-boot-gradle-plugin:2.2.1.RELEASE')
        classpath('com.google.cloud.tools.jib:com.google.cloud.tools.jib.gradle.plugin:1.8.0')
    }
}

and then you can build an image with

$ ./gradlew jib --image=myorg/myapp

As with the Maven build, if you have authenticated with docker on the command line, the image push will authenticate from your local ~/.docker configuration.
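
If you would rather build to a local docker daemon than push straight to a registry, Jib also has a dockerBuild goal (Maven) and a jibDockerBuild task (Gradle):

$ mvn com.google.cloud.tools:jib-maven-plugin:dockerBuild -Dimage=myorg/myapp
$ ./gradlew jibDockerBuild --image=myorg/myapp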

Continuous Integration

Automation is part of every application lifecycle these days (or should be). The tools that people use to do the automation tend to be quite good at just invoking the build system from the source code. So if that gets you a docker image, and the environment in the build agents is sufficiently aligned with the developer's own environment, that might be good enough. Authenticating to the docker registry is likely to be the biggest challenge, but there are features in all the automation tools to help with that.

However, sometimes it is better to leave container creation completely to an automation layer, in which case the user's code might not need to be polluted. Container creation is tricky, and developers sometimes don't really care about it. If the user code is cleaner there is more chance that a different tool can "do the right thing", applying security fixes, optimizing caches etc. There are multiple options for automation and they will all come with some features related to containers these days. We are just going to look at a couple.

Concourse

Concourse is a pipeline-based automation platform that can be used for CI and CD. It is heavily used inside Pivotal and the main authors of the project work there. Everything in Concourse is stateless and everything runs in a container, except the CLI. Since running containers is the main order of business for the automation pipelines, creating containers is well supported. The Docker Image Resource is responsible for keeping the output state of your build up to date, if it is a container image. Here's an example pipeline that builds a docker image for the sample above, assuming it is in github at myorg/myapp and has a Dockerfile at the root and a build task declaration in src/main/ci/build.yml:

resources:

- name: myapp
  type: git
  source:
    uri: https://github.com/myorg/myapp.git

- name: myapp-image
  type: docker-image
  source:
    email: {{docker-hub-email}}
    username: {{docker-hub-username}}
    password: {{docker-hub-password}}
    repository: myorg/myapp
jobs:

- name: main
  plan:
  - task: build
    file: myapp/src/main/ci/build.yml
  - put: myapp-image
    params:
      build: myapp

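The build task declaration at src/main/ci/build.yml is not shown in this guide. Purely for illustration, a minimal sketch of such a task might look like the following (the builder image, input name, and build command are assumptions):

platform: linux
image_resource:
  type: docker-image
  source: {repository: openjdk, tag: 8-jdk-alpine}
inputs:
- name: myapp
run:
  path: sh
  args:
  - -exc
  - |
    cd myapp
    ./mvnw -B -DskipTests package
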
The structure of a pipeline is very declarative: you define "resources" (which are either input or output or both), and "jobs" (which use and apply actions to resources). If any of the input resources changes a new build is triggered. If any of the output resources changes during a job, then it is updated. The pipeline could be defined in a different place than the application source code. And for a generic build setup the task declarations could be centralized or externalized as well. This allows some separation of concerns between development and automation, if that's the way you roll.

Jenkins

Jenkins is another popular automation server. It has a huge range of features, but one that is the closest to the other automation samples here is the pipeline feature. Here's a Jenkinsfile that builds a Spring Boot project with Maven and then uses a Dockerfile to build an image and push it to a repository:

Jenkinsfile

node {
    checkout scm
    sh './mvnw -B -DskipTests clean package'
    docker.build("myorg/myapp").push()
}

For a (realistic) docker repository that needs authentication in the build server, you can add credentials to the docker object above using docker.withRegistry(…). For example:
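
A sketch of what that might look like with the Docker Pipeline plugin's withRegistry step (the registry URL and credentials id here are placeholders):

Jenkinsfile
node {
    checkout scm
    sh './mvnw -B -DskipTests clean package'
    docker.withRegistry('https://registry.example.com', 'registry-credentials-id') {
        docker.build("myorg/myapp").push()
    }
}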

Buildpacks

Note
The Spring Boot Maven and Gradle plugins use buildpacks in exactly the same way that the pack CLI does in the examples below. The main difference is that the plugins use docker to run the builds, whereas pack doesn't need to. The resulting images are identical given the same inputs.

Cloud Foundry has used containers internally for many years now, and part of the technology used to transform user code into containers is Build Packs, an idea originally borrowed from Heroku. The current generation of buildpacks (v2) generates generic binary output that is assembled into a container by the platform. The new generation of buildpacks (v3) is a collaboration between Heroku and other companies, including Pivotal, and it builds container images directly and explicitly. This is very interesting for developers and operators. Developers don't need to care so much about the details of how to build a container, but they can easily create one if they need to. Buildpacks also have lots of features for caching build results and dependencies, so often a buildpack will run much quicker than a native docker build. Operators can scan the containers to audit their contents and transform them to patch them for security updates. And you can run the buildpacks locally (e.g. on a developer machine, or in a CI service), or in a platform like Cloud Foundry.

The output from a buildpack lifecycle is a container image, but you don't need docker or a Dockerfile, so it's CI and automation friendly. The filesystem layers in the output image are controlled by the buildpack, and typically many optimizations will be made without the developer having to know or care about them. There is also an Application Binary Interface between the lower level layers, like the base image containing the operating system, and the upper layers, containing middleware and language specific dependencies. This makes it possible for a platform, like Cloud Foundry, to patch lower layers if there are security updates without affecting the integrity and functionality of the application.

To give you an idea of the features of a buildpack, here is an example using the Pack CLI from the command line (it would work with the sample app we have been using in this guide; no need for a Dockerfile or any special build configuration):

$ pack build myorg/myapp --builder=cloudfoundry/cnb:bionic --path=.
2018/11/07 09:54:48 Pulling builder image 'cloudfoundry/cnb:bionic' (use --no-pull flag to skip this step)
2018/11/07 09:54:49 Selected run image 'packs/run' from stack 'io.buildpacks.stacks.bionic'
2018/11/07 09:54:49 Pulling run image 'packs/run' (use --no-pull flag to skip this step)

*** DETECTING:
2018/11/07 09:54:52 Group: Cloud Foundry OpenJDK Buildpack: pass | Cloud Foundry Build System Buildpack: pass | Cloud Foundry JVM Application Buildpack: pass

*** ANALYZING: Reading information from previous image for possible re-use

*** BUILDING:

-----> Cloud Foundry OpenJDK Buildpack 1.0.0-BUILD-SNAPSHOT

-----> OpenJDK JDK 1.8.192: Reusing cached dependency

-----> OpenJDK JRE 1.8.192: Reusing cached launch layer

-----> Cloud Foundry Build System Buildpack 1.0.0-BUILD-SNAPSHOT

-----> Using Maven wrapper
       Linking Maven Cache to /home/pack/.m2

-----> Building application
       Running /workspace/app/mvnw -Dmaven.test.skip=true package
...

---> Running in e6c4a94240c2

---> 4f3a96a4f38c

---> 4f3a96a4f38c
Successfully built 4f3a96a4f38c
Successfully tagged myorg/myapp:latest
$ docker run -p 8080:8080 myorg/myapp
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.3.0.RELEASE)
2018-11-07 09:41:06.390  INFO 1 --- [main] hello.Application: Starting Application on 1989fb9a00a4 with PID 1 (/workspace/app/BOOT-INF/classes started by pack in /workspace/app)
 ...

The --builder is a docker image that runs the buildpack lifecycle - typically it would be a shared resource for all developers, or all developers on a single platform. You can set the default builder on the command line (creates a file in ~/.pack) and then omit that flag from subsequent builds. NOTE: The cloudfoundry/cnb:bionic builder also knows how to build an image from an executable jar file, so you can build using mvnw first and then point the --path to the jar file for the same result.

Knative

Another new project in the container and platform space is Knative. Knative is a lot of things, but if you are not familiar with it you can think of it as a building block for building a serverless platform. It is built on Kubernetes so ultimately it consumes container images, and turns them into applications or "services" on the platform. One of the main features it has, though, is the ability to consume source code and build the container for you, making it more developer and operator friendly. Knative Build is the component that does this and is itself a flexible platform for transforming user code into containers - you can do it in pretty much any way you like. Some templates are provided with common patterns like Maven and Gradle builds, and multi-stage docker builds using Kaniko. There is also a template that uses Buildpacks, which is very interesting for us, since buildpacks have always had good support for Spring Boot.

Closing

This guide has presented a lot of options for building container images for Spring Boot applications. All of them are completely valid choices, and it is now up to you to decide which one you need. Your first question should be "do I really need to build a container image?" If the answer is "yes" then your choices will likely be driven by efficiency and cacheability, and by separation of concerns. Do you want to insulate developers from needing to know too much about how container images are created? Do you want to make developers responsible for updating images when operating system and middleware vulnerabilities need to be patched? Or maybe developers need complete control over the whole process and they have all the tools and knowledge they need.
