Epic
Resolution: Unresolved
Major
AD482 - RHAMQS1.8-en-5-20221213
Change lectures
False
False
To Do
en-US (English)
2.7. Sending Data with Producers
The level of detail about ProducerRecord goes beyond what the GE touches. Provide information on the topic and the value only.
The lecture lacks information about the configuration needed to set up a regular producer; that configuration appears only in the GE.
The information about the process is overwhelming, as we create neither a Partitioner nor a Serializer. Also, we use only one way to produce records, the fire-and-forget one, which differs from the GE.
Discussing message acks does not make sense here, as they relate to the last chapter.
Focus the section on Quarkus only, instead of working on the Kafka APIs.
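To make the configuration gap concrete, a minimal sketch of the producer settings such a lecture could cover. These are the standard Kafka client property names; the broker address and serializer choices are example values, not the course's actual configuration:

```properties
# Minimal producer configuration (example values).
bootstrap.servers=localhost:9092
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=org.apache.kafka.common.serialization.StringSerializer
```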
2.9. Receiving Data with Consumers
Again, we discuss consumer groups and partition assignments, which are not used by the GE.
Commit strategies are discussed in a later chapter; it is too early to describe these concepts.
We use client configuration that was not addressed in the lecture.
Discuss only the Quarkus APIs. In later chapters we drop the Kafka API, so why bother discussing it?
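As a sketch of what a Quarkus-only treatment might configure, assuming a hypothetical `prices` channel (these are SmallRye Reactive Messaging Kafka connector property names, with example values):

```properties
# Example consumer channel configuration for Quarkus (hypothetical channel name).
kafka.bootstrap.servers=localhost:9092
mp.messaging.incoming.prices.connector=smallrye-kafka
mp.messaging.incoming.prices.topic=prices
mp.messaging.incoming.prices.value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
```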
2.11. Defining Data Formats and Structures
Break down data formats without Avro. Use regular JSON (with Jackson) to serialize/deserialize, as it does not need Avro right off the bat. Also, SerDes is an important concept even in simple communication.
After that, present Avro and Apicurio and how they help when you need to deal with multiple microservices that depend on a common schema.
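The SerDes round trip suggested above can be sketched without Avro. The interfaces below are simplified stand-ins for Kafka's `Serializer`/`Deserializer` contracts, the hand-rolled string formatting stands in for Jackson so the example stays dependency-free, and `TemperatureReading` is a hypothetical event type:

```java
import java.nio.charset.StandardCharsets;

// Simplified stand-ins for Kafka's serializer/deserializer contracts
// (the real interfaces live in org.apache.kafka.common.serialization).
interface Serializer<T> { byte[] serialize(String topic, T data); }
interface Deserializer<T> { T deserialize(String topic, byte[] data); }

// Hypothetical event type; in the course this would be a Jackson-mapped POJO.
record TemperatureReading(String sensorId, double celsius) {}

class TemperatureSerde implements Serializer<TemperatureReading>, Deserializer<TemperatureReading> {

    // Hand-rolled JSON to keep the sketch dependency-free; with Jackson
    // this would be ObjectMapper#writeValueAsBytes.
    @Override
    public byte[] serialize(String topic, TemperatureReading reading) {
        String json = String.format("{\"sensorId\":\"%s\",\"celsius\":%s}",
                reading.sensorId(), reading.celsius());
        return json.getBytes(StandardCharsets.UTF_8);
    }

    // Naive parsing, standing in for ObjectMapper#readValue.
    @Override
    public TemperatureReading deserialize(String topic, byte[] data) {
        String json = new String(data, StandardCharsets.UTF_8);
        String sensorId = json.split("\"sensorId\":\"")[1].split("\"")[0];
        double celsius = Double.parseDouble(json.split("\"celsius\":")[1].replace("}", ""));
        return new TemperatureReading(sensorId, celsius);
    }
}
```

This keeps the focus on the SerDes contract itself, which is the concept the section needs before Avro enters the picture.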
Swap chapters 3 and 4.
Chapter 4:
Discuss the differences between stateless and stateful transformations.
Also, provide information about the StreamsConfig API, as it is only presented in the GE.
Provide a quick overview of the Java Streams API so that students can familiarize themselves with the model Kafka Streams builds on.
Disregard the initial discussion about the timeline; it is confusing at this point and is discussed later.
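A quick Java Streams overview like the one suggested above could reuse the classic word-count shape, here in plain `java.util.stream` with no Kafka dependency (the class name and sample lines are illustrative):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class StreamsPrimer {

    // Word count expressed with the Java Streams API: the same
    // flatMap / group / count shape that the Kafka Streams
    // word-count example applies to a KStream.
    public static Map<String, Long> wordCount(List<String> lines) {
        return lines.stream()
                .flatMap(line -> Arrays.stream(line.toLowerCase().split("\\s+")))
                .collect(Collectors.groupingBy(word -> word, Collectors.counting()));
    }

    public static void main(String[] args) {
        // Prints the per-word counts for the two sample lines.
        System.out.println(wordCount(List.of("hello kafka", "hello streams")));
    }
}
```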
Chapter 3:
Discuss Kafka with the Quarkus APIs only.
Describing Kafka Streams Architecture
Remind students of the idea of partitions and how tables/streams use them to divide processing across multiple streams (repartitioning).
Discuss streams with streams in one section only.
Then streams with tables in another section.
Finally, work with tables and tables in a third section.
Do not discuss topologies at the beginning of the chapter. Leave this discussion for a quiz instead of mixing it into the lecture as it is now.
Move the Partitioning Stream Processing for Scalability section to the troubleshooting chapter.
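As a dependency-free analogy for the stream-with-table case, a KTable behaves like the latest-value map below, and a stream-table join like the lookup. This is plain Java, not the Kafka Streams API; the class name and data are illustrative:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class StreamTableJoin {

    // The "table" side: latest known value per key, which is what a
    // KTable materializes from its topic.
    static final Map<Integer, String> USERS = Map.of(1, "alice", 2, "bob");

    // The "stream" side joined against the table by key, mirroring
    // inner-join semantics: events whose key is missing from the
    // table are dropped.
    public static List<String> enrich(List<Integer> clickEvents) {
        return clickEvents.stream()
                .filter(USERS::containsKey)
                .map(userId -> USERS.get(userId) + " clicked")
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(enrich(List.of(1, 3, 2))); // [alice clicked, bob clicked]
    }
}
```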
Chapter 5:
First, discuss existing Kafka Connect solutions such as Debezium.
Then present transformations after using Debezium, and integrate them into the pipeline.
Finally, discuss how to create custom Kafka Connect extensions.
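A Debezium-first flow like the one proposed could be illustrated with a source connector registration that also applies the ExtractNewRecordState transformation. The connector name, connection values, and topic prefix below are example values, not a verified course configuration:

```json
{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "dbz",
    "topic.prefix": "inventory",
    "transforms": "unwrap",
    "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState"
  }
}
```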