nitwit005 6 hours ago

> When producing a record to a topic and then using that record for materializing some derived data view on some downstream data store, there’s no way for the producer to know when it will be able to "see" that downstream update. For certain use cases it would be helpful to be able to guarantee that derived data views have been updated when a produce request gets acknowledged, allowing Kafka to act as a log for a true database with strong read-your-own-writes semantics.

Just don't use Kafka.

Write to the downstream datastore directly. Then you know your data is committed and you have a database to query.

  • rakoo 4 hours ago

    Alternatively, your write doesn't have to be fire-and-forget: downstream datastores can also write to kafka (this time fire-and-forget) and the initial client can wait for that event to acknowledge the initial write
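
    A minimal sketch of that ack-via-event pattern, with in-memory queues standing in for the two topics (all names here are hypothetical, not a real Kafka API):

```python
import queue
import threading
import uuid

# In-memory stand-ins for two topics: "writes" carries the original
# records, "write_acks" carries the fire-and-forget confirmations the
# downstream datastore emits once it has materialized a record.
writes = queue.Queue()
write_acks = queue.Queue()

def downstream_datastore():
    """Consume one write, materialize it, then emit an ack event."""
    record = writes.get()
    # ... apply the record to the derived view here ...
    write_acks.put(record["correlation_id"])

def produce_and_wait(payload, timeout=5.0):
    """Produce a record, then block until the downstream ack arrives."""
    correlation_id = str(uuid.uuid4())
    writes.put({"correlation_id": correlation_id, "payload": payload})
    # With a real broker this would be a consumer filtering the ack
    # topic for our correlation id; a single queue suffices here.
    acked = write_acks.get(timeout=timeout)
    return acked == correlation_id

threading.Thread(target=downstream_datastore, daemon=True).start()
print(produce_and_wait({"user": 1, "balance": 100}))  # True
```

    The correlation id is what turns a fire-and-forget write into read-your-own-writes: the client only proceeds once the derived view has confirmed that specific record.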

  • wvh 5 hours ago

    The problem is that you don't know who's listening. You don't want all possible interested parties to hammer the database. Hence the events in between. Arguably, I'd not use Kafka to store actual data, just to notify in-flight.

    • Spivak 2 hours ago

      For read-only queries, hammer away; I can scale reads almost infinitely horizontally. There's no secret sauce that makes it so only Kafka can do this.

  • salomonk_mur 29 minutes ago

    Yeah... Not happening when you have scores of clients running down your database.

    The reason message queue systems exist is scale. Good luck sending a notification at 9am to your 3 million users and keeping your database alive in the sudden influx of activity. You need to queue that load.
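
    The load-leveling itself can be sketched without any broker at all; here a plain in-process queue absorbs a (scaled-down) burst and is drained in bounded batches. `drain` and the batch size are illustrative stand-ins, not any real client API:

```python
import queue

def drain(q, batch_size):
    """Pull work off the queue in bounded batches so the downstream
    system sees a steady trickle instead of the full spike."""
    batch = []
    while len(batch) < batch_size and not q.empty():
        batch.append(q.get())
    return batch

# Scaled-down stand-in for the 9am burst (3 million users in reality):
q = queue.Queue()
for user_id in range(10_000):
    q.put(user_id)

sent = 0
while not q.empty():
    batch = drain(q, batch_size=500)  # hypothetical downstream rate limit
    # the real notification send would go here; each iteration could
    # sleep to pace deliveries against what the database tolerates
    sent += len(batch)

print(sent)  # 10000
```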

  • thom 5 hours ago

    Of course, if you don't have separate downstream and upstream datastores, you don't have anything to do in the first place.

  • menzoic 5 hours ago

    Writing directly to the datastore ignores the need for queuing the writes. How do you solve for that need?

    • immibis 5 hours ago

      Why do you need to queue the writes?

      • vergessenmir 4 hours ago

        some writes might fail, you may need to retry, the data store may be temporarily unavailable, etc.

        There may be many things that go wrong and how you handle this depends on your data guarantees and consistency requirements.

        If you're not queuing what are you doing when a write fails, throwing away the data?

        • bushbaba 32 minutes ago

          Some Kafka writes might fail. Hence the Kafka client having a queue with retries.
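
          A sketch of the knobs that retry queue is tuned with, assuming librdkafka/confluent-kafka config names (the Java client has equivalent properties):

```python
# Producer settings that make the client's internal queue retry failed
# writes instead of dropping them (librdkafka-style config names):
producer_config = {
    "acks": "all",                  # wait for all in-sync replicas
    "enable.idempotence": True,     # retries won't duplicate records
    "retries": 2147483647,          # retry until the delivery timeout
    "delivery.timeout.ms": 120000,  # total time budget per record
    "retry.backoff.ms": 100,        # pause between retry attempts
}

# Records that exhaust the timeout still surface as delivery errors,
# so the application must ultimately decide what to do with a write
# that truly failed - Kafka only moves that decision, it doesn't
# remove it.
print(producer_config["enable.idempotence"])  # True
```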

      • smitty1e 4 hours ago

        A proficient coder can write a program to accomplish a task in the singular.

        In the plural, accomplishing that task in a performant way at enterprise scale seems to involve turning every function call into an asynchronous, queued service of some sort.

        Which then begets additional deployment and monitoring services.

        A queued problem requires a cute solution, bringing acute pain.

vim-guru 6 hours ago

https://nats.io is easier to use than Kafka, and I believe it already solves several of the points in this post, like removing partitions, supporting key-based streams, and having flexible topic hierarchies.

  • wvh 5 hours ago

    I came here to say just that. Nats solves a lot of those challenges, like different ways to query and preserve messages, hierarchical data, decent authn/authz options for multi-tenancy, much lighter and easier to set up, etc. It has more of a messaging and k/v store feel than the log Kafka is, so while there's some overlap, I don't think they fit the exact same use cases. Nats is fast, but I haven't seen any benchmarks for specifically the bulk write-once append log situation Kafka is usually used for.

    Still, if a hypothetical new Kafka would incorporate some of Nats' features, that would be a good thing.

tyingq 22 minutes ago

He mentions AutoMQ right in the opener. And if I follow the link, they pitch it in a way that sounds very "too good to be true".

Anyone here have some real world experience with it?

Ozzie_osman 7 hours ago

I feel like everyone's journey with Kafka ends up being pretty similar. Initially, you think "oh, an append-only log that can scale, brilliant and simple" then you try it out and realize it is far, far, from being simple.

  • carlmr 6 hours ago

    I'm wondering how much of that is bad developer UX and defaults, and how much of that is inherent complexity in the problem space.

    Like the article outlines, partitions are not that useful for most people. Instead of removing them, how about putting them behind a feature flag, i.e. off by default? That would ease 99% of users' problems.

    The next point in the article that resonates with me is the lack of proper schema support. That's just bad UX again, not inherent complexity of the problem space.

    On the testing side, why do I need to spin up a Kafka testcontainer? Why is there no in-memory Kafka server that I can use for simple testing purposes?

    • ahoka 6 hours ago

      I think it's just horrible software built on great ideas, sold on the false premise that this is a generic message queue and that if you don't use it you cannot "scale".

      • mrkeen 5 hours ago

        It's not just about the scaling, it's about solving the "doing two things" problem.

        If you take action a, then action b, your system will throw 500s fairly regularly between those two steps, leaving your user in an inconsistent state. (a = pay money, b = receive item). Re-ordering the steps will just make it break differently.

        If you stick both actions into a single event ({userid} paid {money} for {item}) then "two things" has just become "one thing" in your system. The user either paid money for item, or didn't. Your warehouse team can read this list of events to figure out which items to ship, and your payments team can read this list of events to figure out users' balances and owed taxes.

        (You could do the one-thing-instead-of-two-things using a DB instead of Kafka, but then you have to invent some kind of pub-sub so that callers know when to check for new events.)

        Also it's silly waiting around to see exceptions build up in your dev logs, or for angry customers to reach out via support tickets. When your implementation depends on publishing literal events of what happened, you can spin up side-cars which verify properties of your system in (soft) real-time. One side-car could just read all the ({userid} paid {money} for {item}) events and ({item} has been shipped) events. It's a few lines of code to match those together and all of a sudden you have a monitor of "Whose items haven't been shipped?". Then you can debug-in-bulk (before the customers get angry and reach out) rather than scour the developer logs for individual userIds to try to piece together what happened.

        Also, read this thread https://news.ycombinator.com/item?id=43776967 from a day ago, and compare this approach to what's going on in there, with audit trails, soft-deletes and updated_at fields.
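
        The "few lines of code" for such a side-car monitor might look like this; the event shapes are made up for illustration:

```python
# Event log as it might look once "two things" are one event each:
events = [
    {"type": "paid",    "user": "alice", "item": "book", "amount": 12},
    {"type": "paid",    "user": "bob",   "item": "lamp", "amount": 30},
    {"type": "shipped", "user": "alice", "item": "book"},
]

def unshipped(events):
    """Match paid events against shipped events; whatever is left over
    answers 'whose items haven't been shipped?' in bulk."""
    paid = {(e["user"], e["item"]) for e in events if e["type"] == "paid"}
    shipped = {(e["user"], e["item"]) for e in events if e["type"] == "shipped"}
    return paid - shipped

print(unshipped(events))  # {('bob', 'lamp')}
```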

      • carlmr 5 hours ago

        I kind of agree on the horrible software bit, but what do you use instead? And can you convince your company to use that, too?

        • buster 5 hours ago

          I find that many such systems really just need a scalable messaging system. Use RabbitMQ, Nats, Pub/Sub, ... There are plenty.

          Confluent has rather good marketing and when you need messaging but can also gain a persistent, super scalable data store and more, why not use that instead? The obvious answer is: Because there is no one-size-fits-all-solution with no drawbacks.

  • mrweasel 5 hours ago

    The worst part of Kafka, for me, is managing the cluster. I don't really like the partitioning and the almost hopelessness that ensues when something goes wrong. Recovery is really tricky.

    Granted, it doesn't happen often if you plan correctly, but the possibility of something going wrong in the partitioning and replication makes updates and upgrades nightmare fuel.

  • munksbeer 4 hours ago

    I'm not a fan or an anti-fan of Kafka, but I do wonder about the hate it gets.

    We use it for streaming tick data, system events, order events, etc, into kdb. We write to kafka and forget. The messages are persisted, and we don't have to worry if kdb has an issue. Out of band consumers read from the topics and persist to kdb.

    In several years of doing this we haven't really had any major issues. It does the job we want. Of course, we use the aws managed service, so that simplifies quite a few things.

    I read all the hate comments and wonder what we're missing.

  • vkazanov 5 hours ago

    Yeah...

    It took 4 years to properly integrate Kafka into our pipelines. Everything, and I mean everything, is complicated with it: cluster management, numerous semi-tested configurations, etc.

    My final conclusion with it is that the project just doesn't really know what it wants to be. Instead it tries to provide everything for everybody, and ends up being an unbelievably complicated mess.

    You know, there are systems that know what they want to be (Amazon S3, Postgres, etc.), and then there are systems that try to eat the world (Kafka, k8s, systemd).

    • 1oooqooq 5 hours ago

      systemd knows very well what it wants to be, they just don't tell anyone.

      its real goal is to make Linux administration as useless as Windows so RH can sell certifications.

      tell me the output of systemctl is not as awful as opening the Windows service panel.

      • hbogert 5 hours ago

        Tell me systemctl output isn't more beneficial than the per-distro bash mess.

        • vkazanov 4 hours ago

          Well, systemd IS useful, the same way Kafka is. I don't want to go back to crappy bash for service management, and Kafka is the de facto standard event streaming solution.

          But both are kind of hard to understand end-to-end, especially for an occasional user.

      • chupasaurus 4 hours ago

        There have been 2 service panels in Windows since 8, and they are quite different...

  • Hamuko an hour ago

    >Initially, you think "oh, an append-only log that can scale, brilliant and simple"

    Really? I got scared by Kafka by just reading through the documentation.

peanut-walrus 6 hours ago

Object storage for Kafka? Wouldn't this 10x the latency and cost?

I feel like Kafka is a victim of its own success: it's excellent for what it was designed for, but since the design is simple and elegant, people have been using it for all sorts of things for which it was not designed. And of course it's not perfect for those use cases.

  • gunnarmorling 3 hours ago

    It can increase latency (which can be somewhat mitigated by having a write buffer, e.g. on EBS volumes), but it substantially _reduces_ cost: all cross-AZ traffic (which is $$$) is handled by the object storage layer, where it doesn't get charged. This architecture has been tremendously popular recently, championed by Warpstream and also offered by Confluent (Freight clusters), AutoMQ, BufStream, etc. The KIP mentioned in the post aims at bringing this back into the upstream open-source Kafka project.

  • mrweasel 6 hours ago

    > people have been using it for all sorts of things for which it was not designed

    Kafka is misused for some weird stuff. I've seen it used as a user database, which makes absolutely no sense. I've also seen it used as a "key/value" store, which I can't imagine being efficient, as you'd have to scan the entire log.

    Part of it seems to stem from "We need somewhere to store X. We already have Kafka, and requesting a database or key/value store is just a bit too much work, so let's stuff it into Kafka".

    I had a client ask for a Kafka cluster; when we queried what they'd need it for, we got "We don't know yet". Well, that's going to make it a bit hard to dimension and tune correctly. Everyone else used Kafka, so they wanted to use it too.

  • ako 6 hours ago

    Warpstream already does Kafka with object storage.

    • rad_gruchalski 5 hours ago

      WarpStream has been acquired by Confluent.

  • biorach 4 hours ago

    > the design is simple and elegant

    Kafka is simple and elegant?

vermon 5 hours ago

Interesting. If partitioning is not a useful concept in Kafka, what are some better alternatives for controlling consumer concurrency?

frklem 4 hours ago

"Faced with such a marked defensive negative attitude on the part of a biased culture, men who have knowledge of technical objects and appreciate their significance try to justify their judgment by giving to the technical object the only status that today has any stability apart from that granted to aesthetic objects, the status of something sacred. This, of course, gives rise to an intemperate technicism that is nothing other than idolatry of the machine and, through such idolatry, by way of identification, it leads to a technocratic yearning for unconditional power. The desire for power confirms the machine as a way to supremacy and makes of it the modern philtre (love-potion)." Gilbert Simondon, On the mode of existence of technical objects.

This is exactly what I interpret from these kinds of articles: engineering just for the cause of engineering. I am not saying we should not investigate how to improve our engineered artifacts, or that we should not improve them. But I see a generalized lack of reflection on why we should do it, and I think it is related to a detachment from the domains we create software for. The article suggests uses of the technology that come from such different ways of using it that it loses coherence as a technical item.

  • gunnarmorling 3 hours ago

    For each of the items discussed I explicitly mention why they would be desirable to have. How is this engineering for the sake of engineering?

    • frklem 2 hours ago

      True, for each of the points discussed there is an explicit mention of why it is desirable. But those are technical solutions to technical problems. There is nothing wrong with that. The issue is that the whole article is about technicalities because of technicalities, hence the 'engineering for the cause of engineering' (which is different from '...for the sake of...'). It is at this point that the idea of rebuilding Kafka becomes a purely technical one, detached from the intention of having something like Kafka. Other commenters in the thread also pointed out that Kafka lacks a clear intention. I agree that a lot of software nowadays suffers from the same problem.

fintler 5 hours ago

Keep an eye out for Northguard. It's the name of LinkedIn's rewrite of Kafka that was announced at a stream processing meetup about a week ago.

olavgg 4 hours ago

How many of the Apache Kafka issues are addressed by switching to Apache Pulsar?

I skipped learning Kafka, and jumped right into Pulsar. It works great for our use case. No complaints. But I wonder why so few use it?

0x445442 an hour ago

How about logging the logs so I can shell into the server to search the messages?

supermatt 4 hours ago

> "Do away with partitions"

> "Key-level streams (... of events)"

When you are leaning on the storage backend for physical partitioning (as per the cloud example, where they would literally partition based on keys), doesn't this effectively just boil down to renaming partitions to keys, and keys to events?
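
Today's client-side contract can be sketched in a few lines (the real Java client hashes key bytes with murmur2; plain `hash` here is just for illustration):

```python
NUM_PARTITIONS = 12  # fixed at topic creation today

def partition_for(key: bytes) -> int:
    # Many keys fold into few partitions, so ordering and failure
    # domains are per-partition rather than per-key - which is the
    # distinction the key-level-streams proposal is about.
    return hash(key) % NUM_PARTITIONS

# Distinct keys can land on the same partition:
p1 = partition_for(b"user-42")
p2 = partition_for(b"user-1337")
print(0 <= p1 < NUM_PARTITIONS and 0 <= p2 < NUM_PARTITIONS)  # True
```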

  • gunnarmorling 21 minutes ago

    That's one way to look at this, yes. The difference being that keys actually have a meaning to clients (as providers of ordering and also a failure domain), whereas partitions in their current form don't.

mgaunard 3 hours ago

I can't count the number of bad message queues and buses I've seen in my career.

While it would be useful to just blame Kafka for being bad technology, it seems many other people get it wrong, too.

elvircrn 5 hours ago

Surprised there's no mention of Redpanda here.

  • axegon_ 5 hours ago

    Having used both Kafka and Redpanda on several occasions, I'd pick Redpanda any day of the week without a second thought. Easier to set up, easier to maintain, a lot less finicky, and it uses a fraction of the resources.

lewdwig 5 hours ago

Ah, the siren call of the ground-up rewrite. I hadn't realized how deeply the assumption of hard disks underpinning everything is baked into Kafka's design.

But don’t public cloud providers already all have cloud-native event sourcing? If that’s what you need, just use that instead of Kafka.

Spivak 7 hours ago

Once you start asking to query the log by keys, for multi-tenant trees of topics, synchronous-ish commits, and schemas, aren't we just in normal DB territory, where the Kafka log becomes the query log? I think you need to go backwards and ask what feature an RDBMS/NoSQL DB can't provide, and go from there. Because the wishlist is looking like CQRS with a durable front queue whose events are removed once persisted in the backing DB, with clients querying events from the DB.

The backing db in this wishlist would be something in the vein of Aurora to achieve the storage compute split.

imcritic 5 hours ago

Since we are dreaming - add ETL there as well!

hardwaresofton 7 hours ago

See also: Warpstream, which was so good it got acquired by Confluent.

Feels like there is another squeeze in that idea if someone “just” took all their docs and replicated the feature set. But maybe that’s what S2 is already aiming at.

Wonder how long warpstream docs, marketing materials and useful blogs will stay up.

YetAnotherNick 6 hours ago

I wish there were a global file system with node-local disks and rule-driven affinity of data to nodes. We have two extremes: one like EFS or S3 Express, which has no affinity to the processing system, and the other what Kafka etc. are doing, where tightly integrated affinity logic makes the systems more complicated.

  • XorNot 5 hours ago

    I might be misunderstanding but isn't this like, literally what GlusterFS used to do?

    Like, I distinctly recall running it at home as a global unioning filesystem where content had to fully fit on the specific device it was targeted at (rather than being striped or whatever).

Mistletoe 5 hours ago

I know it’s not what the article is about but I really wish we could rebuild Franz Kafka and hear what he thought about the tech dystopia we are in.

>I cannot make you understand. I cannot make anyone understand what is happening inside me. I cannot even explain it to myself. -Franz Kafka, The Metamorphosis

  • gherkinnn 4 hours ago

    Both der Proceß and das Schloß apply remarkably well to our times. No extra steps necessary.

  • tannhaeuser 5 hours ago

    They named it like that for a reason ;)

ActorNightly 7 hours ago

[flagged]

  • raducu 6 hours ago

    > Step 1, stop using Java.

    I've seen these comments for over 15 years, yet for some "unknown", "silly" reason Java keeps being used for really, really useful software like Kafka.

    • LunaSea 5 hours ago

      But it's also the reason why these Apache projects systematically get displaced by better and faster C, C++, Rust or Go alternatives.

      • raducu 5 hours ago

        It would make sense for a highly successful but stable Java project to be replaced like that, but since I'm in the Java world, what I see is that it's usually replaced with another Java project.

        I could provide examples myself, but I'm not convinced it's about Java vs C++ or Go: Hadoop, Cassandra, ZooKeeper.

        • LunaSea 4 hours ago

          As an outsider, Java looks like a language that can be very fast, but it seems certain idiomatic practices or patterns lead to over-engineered and thus sometimes slow projects. The Factory<Factory<X>> joke comes to mind.

  • mrweasel 6 hours ago

    Why? It's fast, featureful, and well maintained, and there are tons of people who know the language.

  • sam_lowry_ 6 hours ago

    Java APIs are indeed awfully complex.

    • ldng 6 hours ago

      More than that: it is a nightmare to write bindings for. So if you want to be truly extensible through plugins, supporting other languages is very important.

  • butterlettuce 7 hours ago

    Step 2, use Python for everything

    • mirekrusin 6 hours ago

      Good choice, leaves space to rewrite in rust later, right?

      • Spivak 6 hours ago

        The fact that this is so common makes me think it calls for a language that is basically Python and Rust smashed together, where a project would have some code on the Python side and some on the Rust side, intermixed fluidly, like how you can drop to asm in C. Don't even really try to make the two halves of the language similar.

        An embedded interpreter and JIT in Rust, basically, but jostled around a bit to make it more cohesive and the data interop more fluid: PyO3 but backwards.

        • alpaca128 5 hours ago

          You can already do that with a macro: https://crates.io/crates/inline-python

          But I'm doubtful that it's going to make things simpler if one can't even decide on a language.

          • Spivak 4 hours ago

            It's not really a "decision" thing; more that the pattern of writing most of the code in Python and rewriting the performance-critical bits in Rust as a CPython extension would be nice if it were made first-class.

        • mirekrusin 4 hours ago

          Maybe something will eventually crystallize out of Mojo?

    • Ygg2 6 hours ago

      Step 3, Rewrite it in Rust.

      Step 4, Rewrite it in Zig.

      Step 5, due to security issues rewrite it in Java 63.