Testing a Spring Boot Application with Kafka and Awaitility
Translated by: ImportNew / 覃佑桦
dzone.com/articles/testing-spring-boot-asynchronous-application-using
This article walks through different ways to test a Spring Boot application using EmbeddedKafka and Awaitility.
When testing synchronous code, the work is essentially "call and wait": invoke the API or endpoint under test, then wait for the response. The test blocks the main thread until the call returns, and once processing completes, the response is compared against the expected result.
Asynchronous code is tested differently from synchronous or blocking code, because nothing blocks the main thread. Put simply, an asynchronous program does not wait for the API response. Instead, the test itself has to hold execution at some point and wait for all the non-blocking operations to produce their results; only then can assertions be made.
Managing the different threads and the concurrency issues that come with them, while keeping unit tests concise and readable, is not trivial.
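To make that concrete, without library support a test usually ends up hand-rolling a polling loop like the sketch below (an illustrative example with a made-up result holder, not code from this article):

import java.util.concurrent.atomic.AtomicReference;

public class NaivePollingSketch {
    public static void main(String[] args) throws InterruptedException {
        AtomicReference<String> result = new AtomicReference<>();

        // Some asynchronous work publishes its result at an unknown point in time.
        new Thread(() -> result.set("payload")).start();

        // Hold the main thread here, polling until the result arrives or we give up.
        long deadline = System.currentTimeMillis() + 5_000;
        while (result.get() == null && System.currentTimeMillis() < deadline) {
            Thread.sleep(50);
        }

        // Only now is it safe to assert on the outcome.
        if (!"payload".equals(result.get())) {
            throw new AssertionError("expected result did not arrive in time");
        }
    }
}

Awaitility and CountDownLatch, both used below, replace this kind of loop with something shorter and far easier to read.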
There are several ways to write such tests for a Spring Boot microservice that talks to Kafka through Spring Cloud Stream.
Let's work through a simple example.
Example
The producer bean sends messages to a Kafka topic.
package com.techwording.scs;

import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;

@EnableBinding(Source.class)
public class Producer {

    private Source mySource;

    public Producer(Source mySource) {
        super();
        this.mySource = mySource;
    }

    public Source getMysource() {
        return mySource;
    }

    public void setMysource(Source mysource) {
        mySource = mysource;
    }
}
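With this bean in place, publishing a message is a single call on the bound output channel. The sketch below shows that call wrapped in a hypothetical PayloadSender component (the class name is made up for illustration; the tests later in the article issue the same call directly):

import org.springframework.messaging.support.MessageBuilder;
import org.springframework.stereotype.Component;

// Hypothetical helper: publishes a plain-text payload through the Producer bean.
@Component
public class PayloadSender {

    private final Producer producer;

    public PayloadSender(Producer producer) {
        this.producer = producer;
    }

    public void sendPayload() {
        producer.getMysource()
                .output()
                .send(MessageBuilder.withPayload("payload")
                        .setHeader("type", "string")
                        .build());
    }
}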
The consumer bean listens on the Kafka topic and receives messages.
import java.util.concurrent.CountDownLatch;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;

@EnableBinding(Sink.class)
public class Consumer {

    // Latch awaited by the CountDownLatch-based test below; in the full source it lives in the test class.
    private static final CountDownLatch latch = new CountDownLatch(1);

    private String receivedMessage;

    @StreamListener(target = Sink.INPUT)
    public void consume(String message) {
        receivedMessage = message;
        latch.countDown();
    }

    public String getReceivedMessage() {
        return receivedMessage;
    }
}
We also need a Kafka broker hosting the topic. In these tests, an embedded Kafka server is created with spring-kafka-test.
@ClassRule
public static EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRule(1, true, TOPIC1);
EmbeddedKafkaRule
spring-kafka-test provides an embedded Kafka broker, which we create here with the JUnit @ClassRule annotation. The rule starts Kafka and Zookeeper servers on random ports before the tests execute and shuts them down once the tests finish. The embedded broker removes the need for real Kafka and Zookeeper instances while running the tests.
Two implementations are shown below: one using Awaitility and one using a CountDownLatch.
Testing with Awaitility
Awaitility is a DSL library for writing JUnit tests against asynchronous Java code; its official GitHub project has the details.
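Before the full test, here is a quick standalone illustration of the DSL (a minimal sketch, independent of the Kafka example): a condition is polled until it holds or a timeout expires.

import static org.awaitility.Awaitility.await;

import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class AwaitilitySketch {
    public static void main(String[] args) {
        AtomicBoolean done = new AtomicBoolean(false);

        // Background work flips the flag at some point.
        new Thread(() -> done.set(true)).start();

        // Poll the condition until it is true, failing after 5 seconds.
        await().atMost(5, TimeUnit.SECONDS).until(done::get);
    }
}

The full test implemented with Awaitility follows.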
package com.techwording.scs;
import java.util.concurrent.TimeUnit;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.cloud.stream.test.binder.TestSupportBinderAutoConfiguration;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.test.context.junit4.SpringRunner;
import static org.assertj.core.api.BDDAssertions.then;
import static org.awaitility.Awaitility.waitAtMost;
@RunWith(SpringRunner.class)
@SpringBootTest(classes = { EmbeddedKafkaAwaitilityTest.App.class,
        EmbeddedKafkaAwaitilityTest.Producer.class, EmbeddedKafkaAwaitilityTest.Consumer.class })
@EnableBinding(Source.class)
public class EmbeddedKafkaAwaitilityTest {

    @SpringBootApplication(exclude = TestSupportBinderAutoConfiguration.class)
    static class App {
    }

    private static final String TOPIC1 = "test-topic-1";

    @ClassRule
    public static EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRule(1, true, TOPIC1);

    @BeforeClass
    public static void setup() {
        System.setProperty("spring.cloud.stream.kafka.binder.brokers",
                embeddedKafka.getEmbeddedKafka().getBrokersAsString());
        System.setProperty("spring.cloud.stream.bindings.input.destination", TOPIC1);
        System.setProperty("spring.cloud.stream.bindings.input.content-type", "text/plain");
        System.setProperty("spring.cloud.stream.bindings.input.group", "input-group-1");
        System.setProperty("spring.cloud.stream.bindings.output.destination", TOPIC1);
        System.setProperty("spring.cloud.stream.bindings.output.content-type", "text/plain");
        System.setProperty("spring.cloud.stream.bindings.output.group", "output-group-1");
    }

    @Autowired
    private Producer producer;

    @Autowired
    private Consumer consumer;

    @Test
    public void testMessageSendReceive_Awaitility() {
        producer.getMysource()
                .output()
                .send(MessageBuilder.withPayload("payload")
                        .setHeader("type", "string")
                        .build());

        waitAtMost(5, TimeUnit.SECONDS)
                .untilAsserted(() -> {
                    then("payload").isEqualTo(
                            EmbeddedKafkaAwaitilityTest.this.consumer.getReceivedMessage());
                });
    }
}
Testing with a CountDownLatch
According to the Java documentation, a CountDownLatch lets one or more threads wait until a set of operations being performed in other threads has completed. To write a test with a CountDownLatch, the latch is first initialized with a count.
The count depends on how many tasks the test has to wait for; here it starts at 1. After the producer has sent its message, the latch waits for the count to reach zero, and the consumer is responsible for decrementing it. Once the consumer has done its work, the main thread resumes and the assertions are executed.
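As a standalone illustration of that flow (a minimal sketch, independent of the Kafka example):

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchSketch {
    public static void main(String[] args) throws InterruptedException {
        // One task has to finish before the main thread may assert.
        CountDownLatch latch = new CountDownLatch(1);

        // The "consumer": does its work, then decrements the count.
        new Thread(latch::countDown).start();

        // The main thread waits here until the count reaches zero
        // (with a timeout as a safety net).
        boolean completed = latch.await(5, TimeUnit.SECONDS);
        System.out.println("work completed: " + completed);
    }
}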
Here is the test implemented with a CountDownLatch:
package com.techwording.scs;
import java.util.concurrent.CountDownLatch;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.cloud.stream.test.binder.TestSupportBinderAutoConfiguration;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.test.context.junit4.SpringRunner;
import static org.assertj.core.api.Assertions.assertThat;
@RunWith(SpringRunner.class)
@SpringBootTest(classes = { EmbeddedKafkaLatchTest.App.class,
        EmbeddedKafkaLatchTest.Producer.class, EmbeddedKafkaLatchTest.Consumer.class })
@EnableBinding(Source.class)
public class EmbeddedKafkaLatchTest {

    @SpringBootApplication(exclude = TestSupportBinderAutoConfiguration.class)
    static class App {
    }

    private static final String TOPIC1 = "test-topic-1";

    @ClassRule
    public static EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRule(1, true, TOPIC1);

    private static CountDownLatch latch = new CountDownLatch(1);

    @BeforeClass
    public static void setup() {
        System.setProperty("spring.cloud.stream.kafka.binder.brokers",
                embeddedKafka.getEmbeddedKafka().getBrokersAsString());
        System.setProperty("spring.cloud.stream.bindings.input.destination", TOPIC1);
        System.setProperty("spring.cloud.stream.bindings.input.content-type", "text/plain");
        System.setProperty("spring.cloud.stream.bindings.input.group", "input-group-1");
        System.setProperty("spring.cloud.stream.bindings.output.destination", TOPIC1);
        System.setProperty("spring.cloud.stream.bindings.output.content-type", "text/plain");
        System.setProperty("spring.cloud.stream.bindings.output.group", "output-group-1");
    }

    @Autowired
    private Producer producer;

    @Autowired
    private Consumer consumer;

    @Test
    public void testMessageSendReceive() throws InterruptedException {
        producer.getMysource()
                .output()
                .send(MessageBuilder.withPayload("payload")
                        .setHeader("type", "string")
                        .build());

        latch.await();
        assertThat(consumer.getReceivedMessage()).isEqualTo("payload");
    }
}
The complete source code can be downloaded here:
github.com/nakulshukla08/techwording/tree/master/spring-cloud-stream-kafka-example