Contents

- Create the etcd data directories
- Create the Docker network
- etcd-cluster-compose.yml
- Start and verify the cluster
  - Start
  - Verify the cluster
- Key/value operations
  - curl
  - etcdctl
- Summary
The following need to be installed:

- docker
- docker-compose

Parameter details (a combined single-node example follows the list):
- --name: the member's name; it is recommended to give each member a recognizable name
- --advertise-client-urls: the client URLs this member advertises to the rest of the cluster
- --initial-advertise-peer-urls: the peer URLs this member advertises to the rest of the cluster
- --listen-client-urls: the list of URLs to listen on for client traffic
- --listen-peer-urls: the list of URLs to listen on for peer traffic
- --initial-cluster-token: the cluster token used when bootstrapping; only nodes with the same token can join the same cluster
- --initial-cluster: the address list of all cluster members
- --initial-cluster-state: the initial cluster state; defaults to new, or set it to existing to join an already running cluster
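To see how these flags fit together, here is a minimal single-node sketch; the member name and the 127.0.0.1 addresses are purely illustrative and differ from the three-node values used in the compose file below:

# single-node sketch (illustrative values only; the real cluster below
# passes the equivalent flags through docker-compose)
etcd --name etcd-single \
  --data-dir /data/app/etcd/ \
  --listen-client-urls http://0.0.0.0:2379 \
  --advertise-client-urls http://127.0.0.1:2379 \
  --listen-peer-urls http://0.0.0.0:2380 \
  --initial-advertise-peer-urls http://127.0.0.1:2380 \
  --initial-cluster-token etcd-single \
  --initial-cluster "etcd-single=http://127.0.0.1:2380" \
  --initial-cluster-state new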
Create the etcd data directories

mkdir -p ./etcd-node{1,2,3}
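Optionally, a quick sanity check that the three data directories exist before the containers try to mount them:

ls -ld ./etcd-node{1,2,3}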
Create the Docker network

docker network create --driver bridge --subnet 172.62.0.0/16 --gateway 172.62.0.1 etcd-cluster
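To confirm the network was created with the intended subnet and gateway, you can inspect it; the --format filter below just trims the output to the IPAM section (shown output is approximate):

docker network inspect etcd-cluster --format '{{json .IPAM.Config}}'
# [{"Subnet":"172.62.0.0/16","Gateway":"172.62.0.1"}]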
etcd-cluster-compose.yml

version: '3'
networks:
  etcd-cluster:
    external: true
services:
  etcd-node1:
    image: quay.io/coreos/etcd:v3.3.1
    container_name: etcd-node1
    ports:
      - "12379:2379"
      - "12380:2380"
    restart: always
    volumes:
      - ./etcd-node1:/data/app/etcd
    command: etcd --name etcd-node1 --data-dir /data/app/etcd/ --advertise-client-urls http://172.62.0.10:2379 --initial-advertise-peer-urls http://172.62.0.10:2380 --listen-client-urls http://0.0.0.0:2379 --listen-peer-urls http://0.0.0.0:2380 --initial-cluster-token etcd-cluster --initial-cluster "etcd-node1=http://172.62.0.10:2380,etcd-node2=http://172.62.0.11:2380,etcd-node3=http://172.62.0.12:2380" --initial-cluster-state new
    networks:
      etcd-cluster:
        ipv4_address: 172.62.0.10
  etcd-node2:
    image: quay.io/coreos/etcd:v3.3.1
    container_name: etcd-node2
    ports:
      - "22379:2379"
      - "22380:2380"
    restart: always
    volumes:
      - ./etcd-node2:/data/app/etcd
    command: etcd --name etcd-node2 --data-dir /data/app/etcd/ --advertise-client-urls http://172.62.0.11:2379 --initial-advertise-peer-urls http://172.62.0.11:2380 --listen-client-urls http://0.0.0.0:2379 --listen-peer-urls http://0.0.0.0:2380 --initial-cluster-token etcd-cluster --initial-cluster "etcd-node1=http://172.62.0.10:2380,etcd-node2=http://172.62.0.11:2380,etcd-node3=http://172.62.0.12:2380" --initial-cluster-state new
    networks:
      etcd-cluster:
        ipv4_address: 172.62.0.11
  etcd-node3:
    image: quay.io/coreos/etcd:v3.3.1
    container_name: etcd-node3
    ports:
      - "32379:2379"
      - "32380:2380"
    restart: always
    volumes:
      - ./etcd-node3:/data/app/etcd
    command: etcd --name etcd-node3 --data-dir /data/app/etcd/ --advertise-client-urls http://172.62.0.12:2379 --initial-advertise-peer-urls http://172.62.0.12:2380 --listen-client-urls http://0.0.0.0:2379 --listen-peer-urls http://0.0.0.0:2380 --initial-cluster-token etcd-cluster --initial-cluster "etcd-node1=http://172.62.0.10:2380,etcd-node2=http://172.62.0.11:2380,etcd-node3=http://172.62.0.12:2380" --initial-cluster-state new
    networks:
      etcd-cluster:
        ipv4_address: 172.62.0.12
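Before bringing the cluster up, it can help to let docker-compose validate the file; the config subcommand prints the fully resolved configuration and fails loudly on indentation or schema mistakes:

docker-compose -f etcd-cluster-compose.yml config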
Start and verify the cluster

Start
[root@k8s-node1 etcd-cluster]# docker-compose -f etcd-cluster-compose.yml up -d
Pulling etcd-node1 (quay.io/coreos/etcd:v3.3.1)...
v3.3.1: Pulling from coreos/etcd
ff3a5c916c92: Pull complete
dec5fcc85a18: Pull complete
3944f16f0112: Pull complete
0b6d29b049fe: Pull complete
d8c39ae91d38: Pull complete
42fcea4864ba: Pull complete
Digest: sha256:454e69370d87554dcb4272833b8f07ce1b5d457caa153bda4070b76d89a1cc97
Status: Downloaded newer image for quay.io/coreos/etcd:v3.3.1
Creating etcd-node1 ... done
Creating etcd-node2 ... done
Creating etcd-node3 ... done
[root@k8s-node1 etcd-cluster]# docker-compose -f etcd-cluster-compose.yml ps -a
   Name                 Command               State                      Ports
------------------------------------------------------------------------------------------------------
etcd-node1   etcd --name etcd-node1 --d ...   Up      0.0.0.0:12379->2379/tcp, 0.0.0.0:12380->2380/tcp
etcd-node2   etcd --name etcd-node2 --d ...   Up      0.0.0.0:22379->2379/tcp, 0.0.0.0:22380->2380/tcp
etcd-node3   etcd --name etcd-node3 --d ...   Up      0.0.0.0:32379->2379/tcp, 0.0.0.0:32380->2380/tcp
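If a container exits or keeps restarting, its startup logs usually explain why (a bad flag, a port already in use, a stale data directory, and so on). A quick way to check, assuming the same file and container names as above:

docker-compose -f etcd-cluster-compose.yml logs etcd-node1
# or directly via docker:
docker logs etcd-node1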
Verify the cluster

Running etcdctl member list inside each node should return the full list of cluster members; if the output is identical on all three nodes, the cluster formed successfully.
[root@k8s-node1 etcd-cluster]# docker exec -it etcd-node1 /bin/sh
/ # etcdctl member list
8cef47d732d4acff: name=etcd-node1 peerURLs=http://172.62.0.10:2380 clientURLs=http://172.62.0.10:2379 isLeader=false
c93af917b643516f: name=etcd-node3 peerURLs=http://172.62.0.12:2380 clientURLs=http://172.62.0.12:2379 isLeader=true
cdee7114ad135065: name=etcd-node2 peerURLs=http://172.62.0.11:2380 clientURLs=http://172.62.0.11:2379 isLeader=false
/ # exit
[root@k8s-node1 etcd-cluster]# docker exec -it etcd-node2 /bin/sh
/ # etcdctl member list
8cef47d732d4acff: name=etcd-node1 peerURLs=http://172.62.0.10:2380 clientURLs=http://172.62.0.10:2379 isLeader=false
c93af917b643516f: name=etcd-node3 peerURLs=http://172.62.0.12:2380 clientURLs=http://172.62.0.12:2379 isLeader=true
cdee7114ad135065: name=etcd-node2 peerURLs=http://172.62.0.11:2380 clientURLs=http://172.62.0.11:2379 isLeader=false
[root@k8s-node1 etcd-cluster]# docker exec -it etcd-node3 /bin/sh
/ # etcdctl member list
8cef47d732d4acff: name=etcd-node1 peerURLs=http://172.62.0.10:2380 clientURLs=http://172.62.0.10:2379 isLeader=false
c93af917b643516f: name=etcd-node3 peerURLs=http://172.62.0.12:2380 clientURLs=http://172.62.0.12:2379 isLeader=true
cdee7114ad135065: name=etcd-node2 peerURLs=http://172.62.0.11:2380 clientURLs=http://172.62.0.11:2379 isLeader=false
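Besides member list, the v2 etcdctl bundled with this image also has a cluster-health subcommand that probes every member's client URL. A minimal check from the host (the output shown is approximate):

docker exec -it etcd-node1 etcdctl cluster-health
# member 8cef47d732d4acff is healthy: got healthy result from http://172.62.0.10:2379
# member c93af917b643516f is healthy: got healthy result from http://172.62.0.12:2379
# member cdee7114ad135065 is healthy: got healthy result from http://172.62.0.11:2379
# cluster is healthy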
Key/value operations

curl
Create
[root@k8s-node1 etcd-cluster]# docker-compose -f etcd-cluster-compose.yml ps -a
   Name                 Command               State                      Ports
------------------------------------------------------------------------------------------------------
etcd-node1   etcd --name etcd-node1 --d ...   Up      0.0.0.0:12379->2379/tcp, 0.0.0.0:12380->2380/tcp
etcd-node2   etcd --name etcd-node2 --d ...   Up      0.0.0.0:22379->2379/tcp, 0.0.0.0:22380->2380/tcp
etcd-node3   etcd --name etcd-node3 --d ...   Up      0.0.0.0:32379->2379/tcp, 0.0.0.0:32380->2380/tcp
[root@k8s-node1 etcd-cluster]# curl -L http://127.0.0.1:12379/version
{"etcdserver":"3.3.1","etcdcluster":"3.3.0"}
[root@k8s-node1 etcd-cluster]# curl -L http://127.0.0.1:12379/v2/keys/Alexclownfish -X PUT -d value=https://blog.alexcld.com
{"action":"set","node":{"key":"/Alexclownfish","value":"https://blog.alexcld.com","modifiedIndex":19,"createdIndex":19}}
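The v2 keys API also accepts a ttl form field, so a key can be made to expire automatically. A sketch using a hypothetical key named temp (the 30-second TTL is arbitrary):

curl -L http://127.0.0.1:12379/v2/keys/temp -X PUT -d value=ephemeral -d ttl=30
# the response includes "ttl":30 and an "expiration" timestamp;
# after 30 seconds a GET on /v2/keys/temp returns errorCode 100 (Key not found)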
Query

[root@k8s-node1 etcd-cluster]# curl -L http://127.0.0.1:12379/v2/keys/Alexclownfish -X GET
{"action":"get","node":{"key":"/Alexclownfish","value":"https://blog.alexcld.com","modifiedIndex":19,"createdIndex":19}}
[root@k8s-node1 etcd-cluster]# curl -L http://127.0.0.1:22379/v2/keys/Alexclownfish -X GET
{"action":"get","node":{"key":"/Alexclownfish","value":"https://blog.alexcld.com","modifiedIndex":19,"createdIndex":19}}
[root@k8s-node1 etcd-cluster]# curl -L http://127.0.0.1:32379/v2/keys/Alexclownfish -X GET
{"action":"get","node":{"key":"/Alexclownfish","value":"https://blog.alexcld.com","modifiedIndex":19,"createdIndex":19}}
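Reads can also block until the key changes by adding wait=true, which is how v2 clients watch for updates. A sketch against the same key; the command hangs until another client modifies /Alexclownfish:

curl -L "http://127.0.0.1:12379/v2/keys/Alexclownfish?wait=true"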
Modify

Modifying a key works the same way as creating one: sending another PUT to the same key overwrites its value.
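The v2 API additionally supports conditional updates (compare-and-swap) through the prevValue, prevIndex, and prevExist query parameters, which is safer than a blind overwrite when several writers touch the same key. A sketch, assuming the key still holds the value written earlier; the new value is arbitrary:

# plain overwrite
curl -L http://127.0.0.1:12379/v2/keys/Alexclownfish -X PUT -d value=new-value
# conditional overwrite: only succeeds if the current value matches prevValue
curl -L "http://127.0.0.1:12379/v2/keys/Alexclownfish?prevValue=https://blog.alexcld.com" -X PUT -d value=new-value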
Delete
[root@k8s-node1 etcd-cluster]# curl -L http://127.0.0.1:12379/v2/keys/Alexclownfish -X DELETE
{"action":"delete","node":{"key":"/Alexclownfish","modifiedIndex":20,"createdIndex":19},"prevNode":{"key":"/Alexclownfish","value":"https://blog.alexcld.com","modifiedIndex":19,"createdIndex":19}}
[root@k8s-node1 etcd-cluster]# curl -L http://127.0.0.1:12379/v2/keys/Alexclownfish -X GET
{"errorCode":100,"message":"Key not found","cause":"/Alexclownfish","index":20}
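Note that the DELETE above only works for plain keys. If the key is a directory, the v2 API requires dir=true (empty directory) or recursive=true (directory with children). A sketch with a hypothetical directory key /mydir:

curl -L "http://127.0.0.1:12379/v2/keys/mydir?recursive=true" -X DELETE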
etcdctl

Create
/ # etcdctl set clownfish 1234567
1234567
/ # etcdctl get clownfish
1234567
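Like the HTTP API, the v2 etcdctl can attach a TTL when setting a key, after which the key expires on its own. A sketch with an arbitrary 30-second TTL (output approximate):

/ # etcdctl set clownfish 1234567 --ttl 30
1234567
/ # etcdctl get clownfish        # before the TTL expires
1234567
/ # etcdctl get clownfish        # roughly 30 seconds later
Error: 100: Key not found (/clownfish)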
Query

/ # etcdctl get clownfish
1234567
Modify

/ # etcdctl get clownfish
1234567
/ # etcdctl set clownfish 987654321ddd
987654321ddd
/ # etcdctl get clownfish
987654321ddd
Delete

/ # etcdctl rm clownfish
PrevNode.Value: 987654321ddd
/ # etcdctl get clownfish
Error: 100: Key not found (/clownfish) [23]
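One last point worth knowing: in the v3.3 image, etcdctl defaults to the v2 API, and the v2 and v3 keyspaces are separate. To try the v3 commands (put/get), set ETCDCTL_API=3 inside the container; a sketch, with approximate output:

/ # export ETCDCTL_API=3
/ # etcdctl put clownfish 1234567
OK
/ # etcdctl get clownfish
clownfish
1234567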
Summary

The above is based on my own hands-on experience; I hope it serves as a useful reference, and thank you for your continued support.