Geode Installation and Administration Guide

Step 1: Install Geode

  1. Download the .zip or .tar file from http://geode.apache.org.

  2. Extract the .zip or .tar file, where path_to_product is the absolute path to the installation location.

    For example, for the .zip format:

    $ unzip apache-geode-1.1.0.zip -d path_to_product
    

    For example, for the .tar format:

    $ tar -xvf apache-geode-1.1.0.tar -C path_to_product
    
  3. Set the JAVA_HOME environment variable. On Linux/Unix:

    JAVA_HOME=/usr/java/jdk1.8.0_60
    export JAVA_HOME
    

    On Windows:

    set JAVA_HOME=c:\Program Files\Java\jdk1.8.0_60 
    
  4. Add the Geode scripts to your PATH environment variable. On Linux/Unix:

    PATH=$PATH:$JAVA_HOME/bin:path_to_product/bin
    export PATH
    

    On Windows:

    set PATH=%PATH%;%JAVA_HOME%\bin;path_to_product\bin 
    
  5. To verify that the installation was successful, type the gfsh version command and check the output. For example:

    $ gfsh version
    v1.1.0
    

    To view more detailed information, such as the build date, build number, and JDK version, type:

    $ gfsh version --full
    

Step 2: Start a Locator

  1. Create a working directory (e.g., my_geode) and change into it. gfsh saves the working directories and log files of the locator and servers in this directory.

  2. Type the gfsh command to start it (or gfsh.bat on Windows).

        _________________________     __
       / _____/ ______/ ______/ /____/ /
      / /  __/ /___  /_____  / _____  /
     / /__/ / ____/  _____/ / /    / /
    /______/_/      /______/_/    /_/    1.5
    
    Monitor and Manage Geode
    gfsh>
    
  3. After entering gfsh, type the start locator command, specifying a name for the locator:

    gfsh>start locator --name=locator1
    Starting a Geode Locator in /home/username/my_geode/locator1...
    .................................
    Locator in /home/username/my_geode/locator1 on ubuntu.local[10334] as locator1 is currently online.
    Process ID: 3529
    Uptime: 18 seconds
    Geode Version: 1.5
    Java Version: 1.8.0_121
    Log File: /home/username/my_geode/locator1/locator1.log
    JVM Arguments: -Dgemfire.enable-cluster-configuration=true -Dgemfire.load-cluster-configuration-from-dir=false
    -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true
    -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
    Class-Path: /home/username/Apache_Geode_Linux/lib/geode-core-1.0.0.jar:
    /home/username/Apache_Geode_Linux/lib/geode-dependencies.jar
    
    Successfully connected to: JMX Manager [host=10.118.33.169, port=1099]
    
    Cluster configuration service is up and running.
    

If you run start locator from gfsh without specifying a member name, gfsh automatically generates a random member name. This is useful for automation.
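Because member names can be auto-generated, gfsh lends itself to scripting. As a minimal, non-authoritative sketch (the file name startup.gfsh is arbitrary), you can put a sequence of commands in a file:

```
start locator --name=locator1
start server --server-port=40411
```

Run the file non-interactively with gfsh run --file=startup.gfsh, or execute a single command directly from the operating-system shell with gfsh -e "start locator".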

Step 3: Start Pulse

This step launches the browser-based Pulse monitoring tool. Pulse is a web application that provides a graphical dashboard for monitoring the vital, real-time health and performance of Geode clusters, members, and regions. See Geode Pulse.

gfsh>start pulse

This command starts Pulse and automatically connects to the JMX Manager running in the locator. On the Pulse login page, enter the default username admin and password admin.

The Pulse application shows the locator (locator1) you just started.

Step 4: Start a Server

A Geode server is a process that acts as a long-lived, configurable member of a cluster (also called a distributed system). A Geode server is used primarily to host long-lived data regions and to run standard Geode processes, such as the server in a client/server configuration. See Running Geode Server Processes.

Start a cache server:

gfsh>start server --name=server1 --server-port=40411

This command starts a cache server named "server1" on the specified port, 40411.

If you run the start server command from gfsh without specifying a member name, gfsh automatically generates a random member name. This is useful for automation.

Watch for changes in Pulse (a new member and server appear). Try expanding the distributed system icon to see the locator and cache server graphically.
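The start server command accepts many more options than shown above. As a sketch (hedged; run gfsh help start server for the authoritative list), you can also point a new server at a specific locator and cap its heap at startup:

```
gfsh>start server --name=server1 --server-port=40411 --locators=localhost[10334] --max-heap=512m
```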

Step 5: Create a replicated, persistent region

In this step, you use a gfsh command to create a region. Regions are the core building blocks of a Geode cluster and provide the means of organizing data. The region created in this exercise replicates data among cluster members and persists the data to disk. See Data Regions.

  1. Create a replicated persistent region:
gfsh>create region --name=regionA --type=REPLICATE_PERSISTENT
Member  | Status
------- | --------------------------------------
server1 | Region "/regionA" created on "server1"

Note that this region is hosted by server1.

  2. Use the gfsh command to view the list of regions in the cluster:
gfsh>list regions
List of regions
---------------
regionA
  3. View the list of cluster members. The locator and cache server you started appear in this list:
gfsh>list members
  Name       | Id
------------ | ---------------------------------------
Coordinator: | 192.0.2.0(locator1:3529:locator)<ec><v0>:59926
locator1     | 192.0.2.0(locator1:3529:locator)<ec><v0>:59926
server1      | 192.0.2.0(server1:3883)<v1>:65390
  4. View the details of the region:
gfsh>describe region --name=regionA
..........................................................
Name            : regionA
Data Policy     : persistent replicate
Hosting Members : server1

Non-Default Attributes Shared By Hosting Members

 Type  | Name | Value
------ | ---- | -----
Region | size | 0
  5. In Pulse, click the green cluster icon to see the new members and new regions you just added.

Note: Keep this gfsh prompt open for later steps.

Step 6: Work with data in the region to demonstrate persistence

Apache Geode manages data in key/value pairs. In most applications, Java programs add, delete, and modify stored data. Data can also be added and retrieved using gfsh commands. See Data Commands .

  1. Run the put command to add data to the region:
gfsh>put --region=regionA --key="1" --value="one"
Result      : true
Key Class   : java.lang.String
Key         : 1
Value Class : java.lang.String
Old Value   : <NULL>

gfsh>put --region=regionA --key="2" --value="two"
Result      : true
Key Class   : java.lang.String
Key         : 2
Value Class : java.lang.String
Old Value   : <NULL>
  2. Query the data in the region:
gfsh>query --query="select * from /regionA"

Result     : true
startCount : 0
endCount   : 20
Rows       : 2

Result
------
two
one

Notice that the result shows the two data entries you just added with the put command. See Data Entries.
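put and query are not the only gfsh data commands; for example, get retrieves a single entry by key (see Data Commands). A quick sketch against the entries created above:

```
gfsh>get --region=regionA --key="1"
```

A remove command is also available for deleting entries; if you experiment with it here, put the entry back so the remaining steps still find both values.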

  3. Stop the cache server:
gfsh>stop server --name=server1
Stopping Cache Server running in /home/username/my_geode/server1 on ubuntu.local[40411] as server1...
Process ID: 3883
Log File: /home/username/my_geode/server1/server1.log
....
  4. Restart the cache server:
gfsh>start server --name=server1 --server-port=40411
  5. Query the data in the region again and notice that the data is still available:
gfsh>query --query="select * from /regionA"

Result     : true
startCount : 0
endCount   : 20
Rows       : 2

Result
------
two
one

Because regionA is persistent, it writes a copy of the data to disk. When a server hosting regionA starts, the data is loaded into the cache. Note that the results show the values of the two data entries you added with the put command before stopping the server. See Data Entries and Data Regions.

Step 7: Observe the effects of replication

In this step, you start a second cache server. Because regionA is replicated, its data is available on every server that hosts the region.

  1. Start a second server:
gfsh>start server --name=server2 --server-port=40412
  2. Run the describe region command to view the details of regionA:
gfsh>describe region --name=regionA
..........................................................
Name            : regionA
Data Policy     : persistent replicate
Hosting Members : server1
                  server2

Non-Default Attributes Shared By Hosting Members

 Type  | Name | Value
------ | ---- | -----
Region | size | 2

Note that you do not need to create regionA again for server2. The command output shows that regionA is now hosted on both server1 and server2. When gfsh starts a server, the server requests its configuration from the cluster configuration service, which distributes the shared configuration to any new server that joins the cluster.
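You can examine the shared configuration that the cluster configuration service hands out. As a sketch (the zip file name is arbitrary), export it from the connected locator:

```
gfsh>export cluster-configuration --zip-file-name=cluster-config.zip
```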

  3. Add a third data entry:
gfsh>put --region=regionA --key="3" --value="three"
Result      : true
Key Class   : java.lang.String
Key         : 3
Value Class : java.lang.String
Old Value   : <NULL>
  4. Open the Pulse application (in a web browser) and observe the cluster topology. You should see one locator with two attached servers. Click the "Data" tab to view information about regionA.
  5. Stop the first cache server:
gfsh>stop server --name=server1
Stopping Cache Server running in /home/username/my_geode/server1 on ubuntu.local[40411] as server1...
Process ID: 4064
Log File: /home/username/my_geode/server1/server1.log
....
  6. Query the data in the remaining cache server:
gfsh>query --query="select * from /regionA"

Result     : true
startCount : 0
endCount   : 20
Rows       : 3

Result
------
two
one
three

Notice that the result contains three entries, including the one you just added.

  7. Add a fourth data entry:
gfsh>put --region=regionA --key="4" --value="four"
Result      : true
Key Class   : java.lang.String
Key         : 4
Value Class : java.lang.String
Old Value   : <NULL>

Only server2 is running. Because the data is replicated and persisted, all of the data is still available. However, the new data entry is currently available only on server2.

gfsh>describe region --name=regionA
..........................................................
Name            : regionA
Data Policy     : persistent replicate
Hosting Members : server2

Non-Default Attributes Shared By Hosting Members

 Type  | Name | Value
------ | ---- | -----
Region | size | 4
  8. Stop server2:
gfsh>stop server --name=server2
Stopping Cache Server running in /home/username/my_geode/server2 on ubuntu.local[40412] as server2...
Process ID: 4185
Log File: /home/username/my_geode/server2/server2.log
.....

Step 8: Restart the cache servers in parallel

In this step, you restart the cache servers in parallel. Because the data is persistent, it is available when the servers restart. Because the data is replicated across multiple servers, the servers must be started in parallel so that they can synchronize their data with each other before fully starting.

  1. Start server1. Because regionA is replicated and persistent, server1 must synchronize its data with the other servers before it finishes starting, so it waits for them to start:
gfsh>start server --name=server1 --server-port=40411
Starting a Geode Server in /home/username/my_geode/server1...
............................................................................
............................................................................

If you look in the server1.log file at this point, you will see log messages similar to the following:

[info 2015/01/14 09:08:13.610 PST server1 <main> tid=0x1] Region /regionA has potentially stale data. It is waiting for another member to recover the latest data.
  My persistent id:

    DiskStore ID: 8e2d99a9-4725-47e6-800d-28a26e1d59b1
    Name: server1
    Location: /192.0.2.0:/home/username/my_geode/server1/.

  Members with potentially new data:
  [
    DiskStore ID: 2e91b003-8954-43f9-8ba9-3c5b0cdd4dfa
    Name: server2
    Location: /192.0.2.0:/home/username/my_geode/server2/.
  ]
  Use the "gfsh show missing-disk-stores" command to see all disk stores that
are being waited on by other members.
  2. Open a second terminal window, change to the same working directory (e.g., my_geode), and start gfsh:
[username@localhost ~/my_geode]$ gfsh
    _________________________     __
   / _____/ ______/ ______/ /____/ /
  / /  __/ /___  /_____  / _____  /
 / /__/ / ____/  _____/ / /    / /
/______/_/      /______/_/    /_/    1.5

Monitor and Manage Geode
  3. Connect to the cluster:
gfsh>connect --locator=localhost[10334]
Connecting to Locator at [host=localhost, port=10334] ..
Connecting to Manager at [host=ubuntu.local, port=1099] ..
Successfully connected to: [host=ubuntu.local, port=1099]
  4. Start server2:
gfsh>start server --name=server2 --server-port=40412

When server2 starts, notice that server1 finishes starting in the first gfsh window:

Server in /home/username/my_geode/server1 on ubuntu.local[40411] as server1 is currently online.
Process ID: 3402
Uptime: 1 minute 46 seconds
Geode Version: 1.5
Java Version: 1.8.0_121
Log File: /home/username/my_geode/server1/server1.log
JVM Arguments: -Dgemfire.default.locators=192.0.2.0[10334] -Dgemfire.use-cluster-configuration=true
-XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true
-Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /home/username/Apache_Geode_Linux/lib/geode-core-1.0.0.jar:
/home/username/Apache_Geode_Linux/lib/geode-dependencies.jar
  5. Verify that the locator and both servers are running:
gfsh>list members
  Name       | Id
------------ | ---------------------------------------
Coordinator: | ubuntu(locator1:2813:locator)<ec><v0>:46644
locator1     | ubuntu(locator1:2813:locator)<ec><v0>:46644
server2      | ubuntu(server2:3992)<v8>:21507
server1      | ubuntu(server1:3402)<v7>:36532
  6. Run a query to verify that all of the data you added with the put command is available:
gfsh>query --query="select * from /regionA"

Result     : true
startCount : 0
endCount   : 20
Rows       : 5

Result
------
one
two
four
three

NEXT_STEP_NAME : END
  7. Stop server2:
gfsh>stop server --dir=server2
Stopping Cache Server running in /home/username/my_geode/server2 on 192.0.2.0[40412] as server2...
Process ID: 3992
Log File: /home/username/my_geode/server2/server2.log
....
  8. Query again to verify that all of the data added with the put command is available:
gfsh>query --query="select * from /regionA"

Result     : true
startCount : 0
endCount   : 20
Rows       : 5

Result
------
one
two
four
three

NEXT_STEP_NAME : END

Step 9: Shut down the entire system, including the locators

To shut down the cluster, perform the following steps:

  1. In the current gfsh session, shut down the cluster:
gfsh>shutdown --include-locators=true

See shutdown .

  2. When prompted, type "Y" to confirm the cluster shutdown.
As a lot of data in memory will be lost, including possibly events in queues,
do you really want to shutdown the entire distributed system? (Y/n): Y
Shutdown is triggered

gfsh>
No longer connected to ubuntu.local[1099].
gfsh>
  3. Type exit to quit the gfsh shell.
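The shutdown command also accepts a --time-out option (a value in seconds). As a sketch:

```
gfsh>shutdown --include-locators=true --time-out=15
```

See the gfsh shutdown reference for the exact semantics of the time-out.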

Appendix: Handling Missing Disk Stores

1. Show Missing Disk Stores

Using gfsh, the show missing-disk-stores command lists all disk stores that hold the most recent data and that other members are waiting on.

Example:

Missing Disk Stores

           Disk Store ID             |      Host      | Directory
------------------------------------ | -------------- | -------------------------
9eb7bf36-330b-4c08-995d-a66f745f0fd6 | /192.168.68.21 | /opt/geode_work/server4/.
4a83ec7f-d80c-460e-a315-01948bd4e396 | /192.168.68.20 | /opt/geode_work/server3/.



No missing colocated region found

Note: You must be connected in gfsh to a locator (JMX Manager) to run this command.

2. Revoke Missing Disk Stores

This section applies to disk stores for which both of the following are true:

  • Disk stores that have the most recent copy of data for one or more regions or region buckets.
  • Disk stores that are unrecoverable, such as when you have deleted them, or their files are corrupted or on a disk that has had a catastrophic failure.

When you cannot bring the latest persisted copy online, use the revoke command to tell the other members to stop waiting for it. Once the store is revoked, the system finds the remaining most recent copy of data and uses that.

Note: Once revoked, a disk store cannot be reintroduced into the system.

Use gfsh show missing-disk-stores to properly identify the disk store you need to revoke. The revoke command takes the disk store ID as input, as listed by that command.

Example:

gfsh>revoke missing-disk-store --id=60399215-532b-406f-b81f-9b5bd8d1b55a
Missing disk store successfully revoked
