
StreamBase LiveView applications: Recovering Data to the Server using Peer-based Recovery


João Dessain Saraiva


Hello TIBCO community, I hope you are doing well and staying safe.

 

I am trying to implement LiveView Data Recovery (Peer-based Recovery) based on the documentation available at:

https://docs.streambase.com/latest/index.jsp?topic=%2Fcom.streambase.sb....

 

I am using two projects based on the Hello LiveView sample, which generate and stream data to a LiveView server (localhost:10080 and localhost:10081).

My goal is that if one of the nodes goes down, the other keeps the data synchronized, such as:

LV 1: 123456.......... (...)

LV 2: ............78910 (...)

 

However, what I am getting is something like this:

LV 1: 123456.......... (...)

LV 2: .............12345 (...)

My configuration is the same in both projects; in engine.conf I added:

(...)
ldm = {
    metadataStore = {
        storeType = "TRANSACTIONAL_MEMORY"
    }
    tableGroup = "myTableGroup"
}
(...)
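
For context, the surrounding file is the standard LDM engine configuration; roughly like this sketch (the name and version values are placeholders):

name = "myldmengine"
version = "1.0.0"
type = "com.tibco.ep.ldm.configuration.ldmengine"

configuration = {
    LDMEngine = {
        ldm = {
            metadataStore = {
                // transactional memory lets a restarting node recover
                // table metadata and contents from a running peer
                storeType = "TRANSACTIONAL_MEMORY"
            }
            // both nodes must declare the same table group name so their
            // tables are treated as replicas of each other
            tableGroup = "myTableGroup"
        }
    }
}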

 

And I also created the ExternalServerConnection lvconf file:

(...)
lv://localhost:10080
lv://localhost:10081
(...)
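
The XML around those URIs follows the usual lvconf external-server-connection layout; roughly like this sketch (the id values are placeholders, and the exact element and attribute names should be checked against the lvconf schema in the docs):

<?xml version="1.0" encoding="UTF-8"?>
<liveview-configuration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="http://www.streambase.com/schemas/lvconf/">
    <!-- one entry per remote LiveView server this node can reach -->
    <external-server-connection id="peer0" uri="lv://localhost:10080"/>
    <external-server-connection id="peer1" uri="lv://localhost:10081"/>
</liveview-configuration>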

And in TableSpace.lvconf:

(...)

 

I believe I am missing some point or concept and would like your feedback to sort this out.

Best regards and thanks in advance,


Hi, I'm not sure what issue you are having -- it's not clear to me what your stream diagrams (copied below) are supposed to mean. Could you please explain further what you mean by them?

 

LV 1: 123456.......... (...)
LV 2: ............78910 (...)
---------------------------------------
LV 1: 123456.......... (...)
LV 2: .............12345 (...)

 


Hello, my goal is that the second project continues with the current status of the first. 

 

The diagram represents a data flow from 1 to 10.

In the first example the data continues from process LV 1 to LV 2; however, in the second there's a break and the data flow restarts.

 

I would like both projects to be synchronized.

Thanks 


OK, I think I understand what you mean there.

Another question: why do you have an ExternalServerConnection lvconf file in this architecture? I only see the two peer nodes in your screenshot, and I think having an external server connection from one LiveView engine to another is about tiering rather than peer-based recovery (though I might be mistaken).

For example, in the TIBCO Streaming product sample called HA Tables (sample_lv_sample_ha_tables), there are 3 nodes: a front end node (services layer only) and two back end nodes (peered services+data layer nodes). The two back end nodes actually contain the replicas of the table, and the front end node routes queries to the back end nodes. In that pattern, the front end node has the external server connection.

 

But I don't see a front end node in your screen shot, so I'm not sure why you are showing us the external server connection configuration.


A High Availability (HA) table implementation requires the use of a message bus to publish data to all tables in the HA group. The actual message bus used is entirely your choice and is implemented in a Publisher EventFlow application. Not all message buses provide the same capabilities, and for many message buses, recovering LiveView data requires another source such as a peer, or a log file that has been configured. Look a bit further down in the user doc you cite for "Publisher Recovery Protocol" to learn about the interaction that needs to be implemented in the Publisher.

Rather than starting with the Hello LiveView sample and building your own HA, you might be better off starting with one of the HA samples that have all this laid out. If you don't have a bus preference, starting with the Kafka HA sample is probably the easiest.

You don't strictly need a service layer (SL) fronting HA-table-configured data layers (DLs), but doing so has many benefits. The SL can cache queries for clients, load balance queries between available DLs, and elastic scaling is supported. Failing over from shut-down (or crashed) DLs to another DL is automatic, and clients don't see a service interruption.
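
If you do skip the SL, the client has to handle that failover itself, for example by trying each peer URI in turn. A minimal sketch using the LiveView Java client API (the class name and error handling are illustrative; the URIs match your two nodes):

import com.streambase.liveview.client.LiveViewConnection;
import com.streambase.liveview.client.LiveViewConnectionFactory;
import com.streambase.liveview.client.LiveViewException;

public class PeerFailoverClient {

    // the two peered LiveView nodes from the original post
    private static final String[] PEER_URIS = {
        "lv://localhost:10080",
        "lv://localhost:10081"
    };

    public static void main(String[] args) throws Exception {
        LiveViewConnection conn = connectToFirstAvailablePeer();
        System.out.println("Connected to " + conn);
        // ... register queries / subscriptions here ...
        conn.close();
    }

    // Try each peer in order and return the first connection that succeeds.
    private static LiveViewConnection connectToFirstAvailablePeer() {
        for (String uri : PEER_URIS) {
            try {
                return LiveViewConnectionFactory.getConnection(uri);
            } catch (LiveViewException e) {
                System.err.println("Peer " + uri + " unavailable: " + e.getMessage());
            }
        }
        throw new IllegalStateException("No LiveView peer is available");
    }
}

With an SL in front, none of this is needed in the client; the SL does the routing.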


Hi Joao,

you wrote

> my goal is that the second project continues with the current status of the first.

and in your first post you wrote:

> My goal is that if one of the nodes goes down, the other keeps the data synchronized, such as:

 

> LV 1: 123456.......... (...)

 

> LV 2: ............78910 (...)

First, I strongly suggest you take a look at the shipped sample for LV HA.

Imagine there are 3 nodes already running. All 3 are in sync. They all have the same data (the same tables defined, and subscribed to the same bus/topic to get that data). Then another node joins. This 4th node will stand up using the nodes already running. This is peer-based recovery. Node 4 will get ALL the data that exists and then subscribe to get new data from the source (the bus).

 

From your description, it sounds like you want the new node to get only the new data and continue forward from there.

In fact, the new node will get the new data, but has to get all the existing data first, so that all the nodes are in sync; all the nodes will have the same data.

Hope this helps

Lorinda

