Spring Boot and RESTful API(3)Cassandra

Installation of the Latest 3.10 Version
>wget http://mirror.stjschools.org/public/apache/cassandra/3.10/apache-cassandra-3.10-bin.tar.gz
Extract the archive, put it in the working directory, and add its bin directory to the PATH.

Prepare the directory
>sudo mkdir /var/lib/cassandra
>sudo chown -R carl /var/lib/cassandra

Command to start Cassandra
>cassandra -Dcassandra.config=file:///opt/cassandra/conf/cassandra.yaml

Connect the client to the local node
>cqlsh localhost 9042
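
From application code the same check can be done with the DataStax Java driver 3.x; here is a minimal sketch, assuming the driver dependency is already on the classpath:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class CassandraPing {
    public static void main(String[] args) {
        // connect to the local node started above
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withPort(9042)
                .build();
        Session session = cluster.connect();
        // same kind of sanity check cqlsh does on startup
        Row row = session.execute("SELECT release_version FROM system.local").one();
        System.out.println("Cassandra version: " + row.getString("release_version"));
        cluster.close();
    }
}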

Run the Easy Sample
Prepare the Database
cqlsh>CREATE KEYSPACE mykeyspace WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
cqlsh>use mykeyspace;
cqlsh>CREATE TABLE customer (id TimeUUID PRIMARY KEY, firstname text, lastname text);
cqlsh>CREATE INDEX customerfirstnameindex ON customer (firstname);
cqlsh>CREATE INDEX customerlastnameindex ON customer (lastname);
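
This is close to the customer table used by the Spring Boot sample linked in the references. A minimal sketch of the mapping with Spring Data Cassandra, assuming the 2.0 line linked below (in 1.5 the packages and repository signature differ; class and method names here are just examples, getters and setters omitted):

import java.util.List;
import java.util.UUID;

import org.springframework.data.cassandra.core.mapping.PrimaryKey;
import org.springframework.data.cassandra.core.mapping.Table;
import org.springframework.data.cassandra.repository.CassandraRepository;

@Table("customer")
public class Customer {
    @PrimaryKey
    private UUID id;           // the timeuuid primary key
    private String firstname;
    private String lastname;
}

interface CustomerRepository extends CassandraRepository<Customer, UUID> {
    // backed by the secondary index on firstname created above
    List<Customer> findByFirstname(String firstname);
}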

cqlsh>CREATE TABLE jobcountdiff(
source_id text,
record_date TIMESTAMP,
new_active_count int,
new_admin_count int,
old_active_count int,
old_admin_count int,
PRIMARY KEY ( source_id, record_date )
) WITH CLUSTERING ORDER BY (record_date DESC);

cqlsh>INSERT INTO jobcountdiff( source_id, record_date, new_active_count, new_admin_count, old_active_count, old_admin_count) VALUES ( '9527', toTimestamp(now()), 1,1,1,1);
cqlsh>INSERT INTO jobcountdiff( source_id, record_date, new_active_count, new_admin_count, old_active_count, old_admin_count) VALUES ( '9527', toTimestamp(now()), 2,2,2,2);
cqlsh>INSERT INTO jobcountdiff( source_id, record_date, new_active_count, new_admin_count, old_active_count, old_admin_count) VALUES ( '9527', toTimestamp(now()), 3,3,3,3);
cqlsh>INSERT INTO jobcountdiff( source_id, record_date, new_active_count, new_admin_count, old_active_count, old_admin_count) VALUES ( '9528', toTimestamp(now()), 3,3,3,3);

cqlsh>select * from jobcountdiff where source_id = '9527' order by record_date desc;

cqlsh>DESCRIBE TABLES;

https://academy.datastax.com/resources/getting-started-time-series-data-modeling
cqlsh>select * from jobcountdiff where source_id = '9527' and record_date > '2017-06-01' and record_date < '2017-06-18';
cqlsh>select * from jobcountdiff where source_id = '9527';
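
Mapped with Spring Data Cassandra, source_id is the partition key and record_date a descending clustering column. A minimal sketch, again assuming the 2.0 packages (field names are just examples; getters and setters omitted):

import java.util.Date;

import org.springframework.data.cassandra.core.cql.Ordering;
import org.springframework.data.cassandra.core.cql.PrimaryKeyType;
import org.springframework.data.cassandra.core.mapping.Column;
import org.springframework.data.cassandra.core.mapping.PrimaryKeyColumn;
import org.springframework.data.cassandra.core.mapping.Table;

@Table("jobcountdiff")
public class JobCountDiff {

    @PrimaryKeyColumn(name = "source_id", ordinal = 0, type = PrimaryKeyType.PARTITIONED)
    private String sourceId;

    // CLUSTERED + DESCENDING matches WITH CLUSTERING ORDER BY (record_date DESC)
    @PrimaryKeyColumn(name = "record_date", ordinal = 1, type = PrimaryKeyType.CLUSTERED,
            ordering = Ordering.DESCENDING)
    private Date recordDate;

    @Column("new_active_count")
    private int newActiveCount;

    @Column("new_admin_count")
    private int newAdminCount;

    @Column("old_active_count")
    private int oldActiveCount;

    @Column("old_admin_count")
    private int oldAdminCount;
}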

cqlsh>CREATE TABLE temperature_by_day3 (
date text,
weatherstation_id text,
event_time timestamp,
temperature text,
PRIMARY KEY ((date), weatherstation_id, event_time)
);
cqlsh>INSERT INTO temperature_by_day3(weatherstation_id,date,event_time,temperature)
VALUES ('1234ABCD','2013-04-03','2013-04-03 07:01:00','72F');
cqlsh>INSERT INTO temperature_by_day3(weatherstation_id,date,event_time,temperature)
VALUES ('1234ABCD','2013-04-03','2013-04-03 08:01:00','72F');
cqlsh>INSERT INTO temperature_by_day3(weatherstation_id,date,event_time,temperature)
VALUES ('1234ABCE','2013-04-03','2013-04-03 07:01:00','72F');

cqlsh>CREATE TABLE temperature_by_day4 (
date text,
weatherstation_id text,
event_time timestamp,
temperature text,
PRIMARY KEY ((date), weatherstation_id, event_time)
) WITH CLUSTERING ORDER BY (weatherstation_id DESC, event_time DESC);
cqlsh>INSERT INTO temperature_by_day4(weatherstation_id,date,event_time,temperature)
VALUES ('1234ABCE','2013-04-03','2013-04-03 07:01:00','72F');
cqlsh>INSERT INTO temperature_by_day4(weatherstation_id,date,event_time,temperature)
VALUES ('1234ABCE','2013-04-03','2013-04-03 08:01:00','72F');
cqlsh>INSERT INTO temperature_by_day4(weatherstation_id,date,event_time,temperature)
VALUES ('1234ABCD','2013-04-03','2013-04-03 08:01:00','72F');
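
Running the same partition query against both tables shows the effect of the clustering order: temperature_by_day3 comes back in the default ascending order, temperature_by_day4 in the descending order declared above. A quick sketch with the Java driver:

import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class ClusteringOrderDemo {
    // queries one partition; rows come back in the table's clustering order
    static void dump(Session session, String table) {
        for (Row r : session.execute("SELECT weatherstation_id, event_time, temperature FROM "
                + table + " WHERE date = '2013-04-03'")) {
            System.out.println(table + ": " + r.getString("weatherstation_id")
                    + " " + r.getTimestamp("event_time") + " " + r.getString("temperature"));
        }
    }
}

Calling dump(session, "temperature_by_day3") and dump(session, "temperature_by_day4") with the Session from the first snippet makes the difference in ordering visible.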

Queries that may be needed:
select * from jobcountdiff where source_id = '9527';
select * from jobcountdiff where source_id = '9527' and record_date > '2017-06-15';
select distinct source_id from jobcountdiff;

Set Up Logging Project and Database
>CREATE KEYSPACE jobsmonitor WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
>use jobsmonitor;
>CREATE TABLE jobcounthistory(
source_id text,
record_date TIMESTAMP,
new_active_count int,
new_admin_count int,
old_active_count int,
old_admin_count int,
PRIMARY KEY ( source_id, record_date )
) WITH CLUSTERING ORDER BY (record_date DESC);

Possible queries are as follows:
select * from jobcounthistory where source_id = 'asdfasf';
select * from jobcounthistory where source_id = 'asdf' and record_date > '2017-06-11';
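
A repository sketch that would cover these two queries, assuming jobcounthistory is mapped the same way as the jobcountdiff entity sketch earlier; with @PrimaryKeyColumn fields the repository id type is Spring Data's MapId (packages again from the 2.0 line):

import java.util.Date;
import java.util.List;

import org.springframework.data.cassandra.core.mapping.MapId;
import org.springframework.data.cassandra.repository.CassandraRepository;

interface JobCountHistoryRepository extends CassandraRepository<JobCountHistory, MapId> {
    // select * from jobcounthistory where source_id = ?
    List<JobCountHistory> findBySourceId(String sourceId);

    // select * from jobcounthistory where source_id = ? and record_date > ?
    List<JobCountHistory> findBySourceIdAndRecordDateGreaterThan(String sourceId, Date after);
}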

>CREATE TABLE jobcountdiff(
date text,
diff int,
record_date TIMESTAMP,
source_id text,
PRIMARY KEY ( date, diff, record_date )
) WITH CLUSTERING ORDER BY (diff ASC, record_date DESC);

Possible queries are as follows:
select * from jobcountdiff where date = '2017-06-15';
select * from jobcountdiff where date = '2017-06-15' and diff > 10;
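
The matching repository sketch for this table, under the same assumptions (here the entity would be remapped to this schema: date as the text partition key, diff and record_date as clustering columns):

import java.util.List;

import org.springframework.data.cassandra.core.mapping.MapId;
import org.springframework.data.cassandra.repository.CassandraRepository;

interface JobCountDiffRepository extends CassandraRepository<JobCountDiff, MapId> {
    // select * from jobcountdiff where date = ?
    List<JobCountDiff> findByDate(String date);

    // select * from jobcountdiff where date = ? and diff > ?
    List<JobCountDiff> findByDateAndDiffGreaterThan(String date, int diff);
}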

http://docs.spring.io/spring-data/cassandra/docs/2.0.0.M4/reference/html/

Cassandra Structure Reference from Old Project
CampaignProfile -> campaignID, profileID (MySQL)
Profile -> profileID, brandCode, name, enabled, description, rules  (MySQL)
Rule -> profileID, attributeMetadataID, operator, value  (MySQL)
AttributeMetadata -> id, brandCode, name, attributeType, allowedValues, enabled, dateCreated (MySQL)

Attributes -> brandCode, deviceID, Attributes(Map)
       put -> brandCode, deviceID, unixtime, attributes
val mutator = HFactory.createMutator(ksp, CompositeSerializer.get)
val column = HFactory.createColumn(unixtime, attrib.toBytes)
column.setClock(ksp.createClock)

mutator.insert(rowkey(brandCode, deviceId), columnFamilyName, column)
if (config.getBoolean(env + ".attributes.indexOnPut")) {
  DeviceLookup.update(brandCode, deviceId, attrib, Profile.enabled(brandCode))
}

        get -> brandCode, deviceID, unixtime
val q = HFactory.createSliceQuery(ksp, CompositeSerializer.get, LongSerializer.get, BytesArraySerializer.get)
  .setKey(rowkey(brandCode, deviceId))
  .setColumnFamily(columnFamilyName)
  .setRange(unixtime, 0L, false, 1)

q.execute.get.getColumns.headOption.map(x => com.digby.localpoint.model.Attributes(x.getValue))

        get -> brandCode, deviceID
val q = HFactory.createSliceQuery(ksp, CompositeSerializer.get, LongSerializer.get, BytesArraySerializer.get)
  .setKey(rowkey(brandCode, deviceId))
  .setColumnFamily(columnFamilyName)
  .setRange(null, null, false, 1)

q.execute.get.getColumns.headOption.map(x => com.digby.localpoint.model.Attributes(x.getValue))

DeviceLookup -> brandCode, profileID,
        addDevice(brandCode, profileId, deviceID)
val mutator: Mutator[Composite] = HFactory.createMutator(ksp, CompositeSerializer.get)
val column = HFactory.createColumn(deviceId, "")
column.setClock(ksp.createClock)
mutator.addInsertion(rowkey(brand, profileId), columnFamilyName, column)
mutator.execute()

        addDevices(brandCode, profileID, deviceIDs:List[String])
var cnt = 0
val mutator: Mutator[Composite] = HFactory.createMutator(ksp, CompositeSerializer.get)
for (deviceId <- deviceIds) {
  cnt += 1
  val column = HFactory.createColumn(deviceId, "")
  column.setClock(ksp.createClock)
  mutator.addInsertion(rowkey(brand, profileId), columnFamilyName, column)
  if ((cnt % batchSize) == 0 || deviceIds.length <= batchSize)
    mutator.execute()
}

def removeDevice(brand: String, profileId: Profile.Id, deviceId: String): Unit = {
  //Remove column
  val mutator: Mutator[Composite] = HFactory.createMutator(ksp, CompositeSerializer.get)
  mutator.addDeletion(rowkey(brand, profileId), columnFamilyName, deviceId, StringSerializer.get)
  mutator.execute()
}

def removeDevices(brand: String, profileId: Profile.Id, deviceIds: List[String]): Unit = {
  //Remove devices
  var cnt = 0
  val mutator: Mutator[Composite] = HFactory.createMutator(ksp, CompositeSerializer.get)
  for (deviceId <- deviceIds) {
    cnt += 1
    mutator.addDeletion(rowkey(brand, profileId), columnFamilyName, deviceId, StringSerializer.get)
    if ((cnt % batchSize) == 0 || deviceIds.length <= batchSize)
      mutator.execute()
  }
}

def update(brandCode: String, deviceId: String, attributes: Attributes, profiles: Set[Profile]): Unit = {
  val mutator: Mutator[Composite] = HFactory.createMutator(ksp, CompositeSerializer.get)
  val column = HFactory.createColumn(deviceId, Array[Byte]())
  val clock = ksp.createClock
  column.setClock(clock)
  profiles.foreach { p =>
    if (p(attributes))
      mutator.addInsertion(rowkey(brandCode, p.id.get), columnFamilyName, column)
    else
      mutator.addDeletion(rowkey(brandCode, p.id.get), columnFamilyName, deviceId, StringSerializer.get, clock)
  }
  mutator.execute()
}

Taking these as examples, we can then create:
CREATE TABLE attributes(
brandCode text,
deviceID text,
PRIMARY KEY ( brandCode, deviceID )
);

>alter table attributes add t1499356374 varchar;
>insert into attributes (brandcode, deviceid, t1499356374) values ('1','1','good device');

We can have dynamic columns for one key set.
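
The same thing can be done from application code; a hypothetical sketch reusing the driver Session from the first snippet (the column name is just the example timestamp above):

import com.datastax.driver.core.Session;

public class DynamicColumns {
    // add a timestamped column on the fly, then write to it, mirroring the cqlsh steps above
    static void addAndFill(Session session) {
        session.execute("ALTER TABLE attributes ADD t1499356374 varchar");
        session.execute("INSERT INTO attributes (brandcode, deviceid, t1499356374) "
                + "VALUES ('1', '1', 'good device')");
    }
}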

Redshift
Yesterday, I also connected to our Redshift database to query some data.

We used this driver
https://s3.amazonaws.com/redshift-downloads/drivers/RedshiftJDBC41-1.2.1.1001.jar

And we downloaded this open source tool
http://www.sql-workbench.net/downloads.html

Register the driver under Manage Drivers and create a new connection profile.
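
For reference, the JDBC connection itself looks roughly like this; the cluster endpoint, database and credentials are placeholders, and the driver class name is the one shipped in the JDBC41 jar:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RedshiftQuery {
    public static void main(String[] args) throws Exception {
        Class.forName("com.amazon.redshift.jdbc41.Driver");
        // placeholder cluster endpoint, database, user and password
        try (Connection conn = DriverManager.getConnection(
                "jdbc:redshift://example-cluster.abc123.us-east-1.redshift.amazonaws.com:5439/dev",
                "awsuser", "password");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT current_date")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}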

References:
https://github.com/spring-projects/spring-boot/tree/v2.0.0.M1/spring-boot-samples/spring-boot-sample-data-cassandra





Reposted from sillycat.iteye.com/blog/2383513