hi mmarkus_
hey, I want to chat with you about the client connection and async notification thing
sure mmarkus_
haven't thought about how I'll be implementing it yet
I'm working on error handling at the moment
have you seen my email: hotrod client - initial connection
not yet mmarkus_, I was coding :)
mmarkus_: do you know if a SQL "select ... where ... not in (....)" is portable?
ah right, np. Whenever you have some time. I'm mainly interested in the initial connection step, async notification is something I'll work on later on
manik_: I don't know
looking for the 'standard' SQL 92 syntax on the web but I can't find it :(
hmm ... ok, no worries
* lucaz_ (~lucas@190.231.140.207) has joined #infinispan
* lucaz has quit (Ping timeout: 265 seconds)
* lucaz_ is now known as lucaz
galderz: can you access hudson? I keep getting proxy errors
mmarkus_, just replied to your email
I'm very confused mmarkus_, cos we discussed this before and we agreed on event handling
manik_, seems to be working again
galderz: mate, excuse my ignorance but I still don't get how the server sends back these events to the cluster
mmarkus_, to the cluster or clients?
to the clients, sorry
right, I haven't fully thought through the low-level details there
mmarkus_, originally we're gonna piggyback that info in the response
the email was just about details
mmarkus_, before sorting out details, we need to figure out what we're gonna do
cos we thought we had agreed on the event model
but manik_ seems to think otherwise
that's why I haven't discussed your questions in detail in the email response
I think it's this phrase that's causing confusion: "Also, a question about the way in which the server notifies the client on topology changes: how is this going to be performed network-wise?
Some of the approaches I see are:"
the content of the message is the one described in the Hot Rod document, the problem raised was just about how the messages are sent back from cluster to server
mmarkus_, the wiki also indicates this is a register/push model
mmarkus_, what I'm trying to say is that of your 3 approaches, we thought we had agreed NOT to do 3
and now manik_ is saying that 3) is the right one
so that's what I'm trying to point out
right, that is true, I see that in the email
yep, +1
the wiki does not go into the lower-level details
but from the wiki: "Hot Rod clients can optionally register with the server so that they receive event notifications upon certain circumstances. These notifications can help clients interact with Hot Rod server in a more efficient manner thanks to the extra information received from the server. This information is then sent from the server in an asynchronous manner and so it's not linked to a request/response pattern."
mmarkus_ galderz just responded to the email
the above clearly indicates that it's not 3) that we're gonna use
The confusion was around key change events vs topology change events
so we did decide to leave out key change events for the time being, correct?
manik_, I thought we'd use one method for everything
it'd be rather complex to have a model for keys and a model for cluster formation changes
right. Shouldn't we describe HOW the cluster will notify the client? That should be part of the protocol IMO
well, you need to think about who will use this stuff. e.g., a small percentage of clients will care about key change events
but each and every client will care about topology changes (otherwise you end up losing all handles to your backend)
(so to some degree it is valid to treat them differently)
manik_, in the thread, in the end I do talk about using events for cluster formation changes
So when you say "events", you refer to a server -> client(s) push-style event similar to key changes?
manik_, sure
and the same mechanism used for topology changes as well?
that's what I thought all along
And does the client open a separate "server socket" for servers to connect to? Or is a persistent TCP channel maintained between client and server for 2-way traffic?
I was thinking more towards the 2nd one
ok, in the latter case it makes sense
One of the emails suggested a client listening on a server socket
and that definitely is a no-no since you will have an explosion of connections
a server socket in the client would not work if there are firewalls
firewalls aside, it won't scale
sure, agree
so you need to make sure whatever mechanism you use, clients just have 1 open socket per server it is connected to
Now the other question: Client C is connected to Servers S1, S2, S3. And there is a change in cluster formation. Who sends this info back to C?
they could all send it since the view id would be the same
client could discard the others
or only the coordinator would do it
manik_: "clients just have 1 open socket per server it is connected to". Why this? If there's a client that sends multiple requests to the same server, wouldn't it make sense to keep a pool of connections, a la JDBC connection pooling?
galderz: we might have a client that is not connected to the coordinator
mmarkus_, true, and the coordinator is a p2p network thing
they could all send it back and reject if they already have the hotrod view id
hotrod view id could be different to the jgroups view id
although it could be reused
mmarkus_: re: > 1 connection, true. And this can be a configuration option on the client. What I meant was, I didn't want a dedicated connection between client and server for events
hmm, if all nodes send event data back to clients, this could be a lot of traffic
galderz, or they can only inform the client that the view has changed.
And it's up to the client to fetch the new view the way it wants
* amontenegro_ (~amonteneg@200.111.187.74) has joined #infinispan
in a single request
* amontenegro_ is now known as aamonte
mmarkus_: yes, that makes more sense
* aamonte is now known as aamonten
but still a lot of unnecessary messages
that's the case only in a push (server to client) approach
if the client would pull the view on a time basis, then it would only be a single message
if you're gonna pull, you might as well piggyback
I do agree that push could generate a lot of traffic, particularly in large clusters
galderz: yes
mmarkus_: the view may be pulled, but the notification that the view has changed is a push, right galderz?
at the moment, both are push
although by separating the two, you could reduce the traffic
push a view change notification as an event and then let clients get the details
and how would the push take place, through a persistent connection?
manik_, mmarkus_, why don't we discuss this next week at greater length with a whiteboard?
galderz: manik_ +1
mmarkus_, yeah, it'd have to be via a persistent connection
I think pushing the notification and letting clients pull updated topology makes sense
mmarkus_, I'm not sure how pooling of persistent connections and such notification would work
for now let all servers broadcast the notification, and later as an optimisation we can minimise this, be smarter about it, and it would still be compatible
ok manik_ galderz, I see your point, I think it might need a dedicated persistent connection
mmarkus_, you could check with trustin, he's the king for any such questions
mmarkus_, let me fwd you an email I sent him when we were discussing this event model
your question about the client side can follow on from the email discussion I'm referring to
but I think even that should not be an issue: for each client to keep an extra (but only one) dedicated persistent connection to the cluster.
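[editor's note] The client-side model converged on above — every server broadcasts a lightweight view-change notification carrying only a view id, and the client discards duplicates and pulls the full topology in a single request — can be sketched roughly as follows. This is a hypothetical illustration only, not Hot Rod's actual wire format: the `TopologyChangeHandler` class, the integer view id comparison, and the `topologyFetcher` callback are all assumptions standing in for the real protocol details being debated here.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.IntFunction;

// Hypothetical sketch of the discussed model: several servers may push the
// same "view changed" notification (carrying just a view id) over the
// client's existing persistent connections; the client deduplicates by
// view id and pulls the full topology only once per new view.
public class TopologyChangeHandler {
    private final AtomicInteger lastSeenViewId = new AtomicInteger(-1);
    private final IntFunction<String> topologyFetcher; // pulls the full view
    private volatile String currentTopology = "";

    public TopologyChangeHandler(IntFunction<String> topologyFetcher) {
        this.topologyFetcher = topologyFetcher;
    }

    /** Called for every notification received from any connected server. */
    public boolean onViewChanged(int viewId) {
        // Discard stale or duplicate notifications: all servers broadcast
        // the same view id, but only the first one triggers a pull.
        int prev = lastSeenViewId.get();
        if (viewId <= prev || !lastSeenViewId.compareAndSet(prev, viewId)) {
            return false;
        }
        currentTopology = topologyFetcher.apply(viewId);
        return true;
    }

    public String currentTopology() {
        return currentTopology;
    }

    public static void main(String[] args) {
        AtomicInteger pulls = new AtomicInteger();
        TopologyChangeHandler h = new TopologyChangeHandler(viewId -> {
            pulls.incrementAndGet();
            return "view-" + viewId; // stand-in for the real topology payload
        });
        // S1, S2 and S3 all push the same view id; only one pull happens.
        h.onViewChanged(7);
        h.onViewChanged(7);
        h.onViewChanged(7);
        System.out.println(pulls.get());         // 1
        System.out.println(h.currentTopology()); // view-7
    }
}
```

The monotonically increasing view id is what makes "let all servers broadcast for now" cheap on the client side: late or duplicate broadcasts are no-ops, so the optimisation mentioned above (being smarter about who broadcasts) stays wire-compatible.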