Hi, this is Nanao.

Today I'd like to write about setting up Kubernetes (k8s) at home.

Background

Lately I've been making a lot of rounds of Hard Off stores (the Japanese second-hand electronics chain; I call these trips "Hadoff runs").

The thing is, I just love junk PCs.

Whenever I find one at a reasonable price, I have fun getting it running again: throwing Ubuntu on it, swapping out the RAM, all for no particular reason.

The trouble is that while the servers keep multiplying, I've never actually decided what to use them for. (Then why do you keep buying them?!)

So, since the list of services I want to run at home has also been growing lately (like this one), I figured this was a good chance to build some k8s nodes at home.

Implementation

First, I stand up the servers at home.

This time I'm using three PCs that I bought as junk.

The PCs tucked under my desk. Sorry for the mess..

The OS is Ubuntu Server 24.04, and I've already confirmed I can SSH into each machine.

On top of that, I'm installing [k3s](https://k3s.io/) as the k8s distribution.

I picked k3s because it's an extremely lightweight k8s distribution and the setup cost is low.

Also, with availability in mind, I decided to build multiple control-plane nodes and add worker nodes later. (With three control-plane nodes, the embedded etcd cluster can keep quorum even if one of them goes down.)

First, run the following command on machine number one.

The steps follow this document.

curl -sfL https://get.k3s.io | sh -
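
Incidentally, on a systemd host like Ubuntu Server the script sets k3s up as a systemd service (k3s.service), so a quick sanity check right after installation could look like this (k3s also bundles its own kubectl):

sudo systemctl status k3s --no-pager
sudo k3s kubectl get node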

Once it's up, let's check the system pods right away.

$ kubectl get pod --namespace kube-system
WARN[0000] Unable to read /etc/rancher/k3s/k3s.yaml, please start server with --write-kubeconfig-mode or --write-kubeconfig-group to modify kube config permissions
error: error loading config file "/etc/rancher/k3s/k3s.yaml": open /etc/rancher/k3s/k3s.yaml: permission denied

Oops, a permission error.

So, adding sudo and trying again...

$ sudo kubectl get pod --namespace kube-system
NAME                                      READY   STATUS      RESTARTS   AGE
coredns-7f496c8d7d-cdzf8                  1/1     Running     0          8m32s
helm-install-traefik-5ptvv                0/1     Completed   0          8m31s
helm-install-traefik-crd-rdkbj            0/1     Completed   0          8m32s
local-path-provisioner-578895bd58-6ntrr   1/1     Running     0          8m32s
metrics-server-7b9c9c4b9c-brdnd           1/1     Running     0          8m32s
svclb-traefik-74f79865-phf72              2/2     Running     0          8m28s
traefik-6f5f87584-gqqvf                   1/1     Running     0          8m13s

And there we go, the pods show up fine.
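
As an aside, the warning above already hints at a cleaner fix: k3s has a --write-kubeconfig-mode option, and if you pass it at install time (or later set write-kubeconfig-mode: "644" in the config file that appears further down), kubectl should work without sudo. I haven't tried it on this cluster, but the sketch would be:

curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode 644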

At this rate, let's knock out the second node too.

...Or so I thought, but as I was about to set up the second machine, something about the documentation felt off..

To install additional agent nodes and add them to the cluster, run the installation script with the K3S_URL and K3S_TOKEN environment variables. Here is an example showing how to join an agent:

curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken sh -

That's what one page says, while this other document

https://docs.k3s.io/ja/datastore/ha-embedded

says:

To get started, first launch a server node with the cluster-init flag to enable clustering, and a token that will be used as a shared secret to join additional servers to the cluster.

curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - server \
    --cluster-init \
    --tls-san=<FIXED_IP> # optional, required when using a fixed registration address

...and I realized that the node-setup commands differ depending on which document you're reading.

In other words, the first document I referenced gave the procedure for adding agent nodes, i.e. worker nodes, whereas the latter describes how to add more nodes that act as control planes.
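
Put side by side, the difference roughly comes down to this (the <...> placeholders are mine):

# join the cluster as an agent (worker) node
curl -sfL https://get.k3s.io | K3S_URL=https://<server>:6443 K3S_TOKEN=<token> sh -

# join the cluster as an additional server (control-plane) node
curl -sfL https://get.k3s.io | K3S_TOKEN=<token> sh -s - server --server https://<server>:6443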

Without noticing the difference, I'd already gone and built the first node...

Ugh, one of those "realize it only afterwards" situations, what a pain... or so I was thinking, when I spotted the following lifesaving passage in the docs!!

If you have an existing cluster using the default embedded SQLite database, you can convert it to etcd simply by restarting your K3s server with the --cluster-init flag. Once that's done, you'll be able to add additional instances as described above.

Phew!!!!

So, let's restart it.

Going mostly by vibes, I figured this would probably do the trick.. so I ran the following command on the first server.

sudo k3s server --cluster-init

The output was as follows.

INFO[0000] Starting k3s v1.34.3+k3s1 (48ffa7b6)
INFO[0000] Managed etcd cluster initializing
INFO[0001] Password verified locally for node thinkcentre1
INFO[0001] certificate CN=thinkcentre1 signed by CN=k3s-server-ca@1764503127: notBefore=2025-11-30 11:45:27 +0000 UTC notAfter=2027-01-08 10:31:18 +0000 UTC
INFO[0002] Module overlay was already loaded
INFO[0002] Module nf_conntrack was already loaded
INFO[0002] Module br_netfilter was already loaded
INFO[0002] Module iptable_nat was already loaded
INFO[0002] Module iptable_filter was already loaded
WARN[0002] Failed to load kernel module nft-expr-counter with modprobe
INFO[0002] Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log
INFO[0002] Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml
INFO[0002] Creating k3s-cert-monitor event broadcaster
INFO[0002] Starting etcd for new cluster, cluster-reset=false
{"level":"info","ts":"2026-01-08T10:31:19.148320Z","caller":"embed/etcd.go:132","msg":"configuring socket options","reuse-address":true,"reuse-port":true}
{"level":"info","ts":"2026-01-08T10:31:19.148613Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://127.0.0.1:2380","https://[サーバーのIP]:2380"]}
{"level":"info","ts":"2026-01-08T10:31:19.148782Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt, key = /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key, client-cert=, client-key=, trusted-ca = /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2026-01-08T10:31:19.149477Z","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://[サーバーのIP]:2379"]}
{"level":"info","ts":"2026-01-08T10:31:19.150083Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.6","git-sha":"HEAD","go-version":"go1.24.11","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":false,"name":"thinkcentre1-435a9a3b","data-dir":"/var/lib/rancher/k3s/server/db/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/rancher/k3s/server/db/etcd/member","force-new-cluster":false,"heartbeat-interval":"500ms","election-timeout":"5s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://[サーバーのIP]:2380"],"listen-peer-urls":["https://127.0.0.1:2380","https://[サーバーのIP]:2380"],"advertise-client-urls":["https://[サーバーのIP]:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://[サーバーのIP]:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"experimental-local-address":"","cors":["*"],"host-whitelist":["*"],"initial-cluster":"thinkcentre1-435a9a3b=https://[サーバーのIP]:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"InitialCorruptCheck=true","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
{"level":"info","ts":"2026-01-08T10:31:19.151081Z","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/lib/rancher/k3s/server/db/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc0000e16a0}"}
INFO[0002] Connecting to proxy                           url="wss://127.0.0.1:6443/v1-k3s/connect"
INFO[0002] start
INFO[0002] schedule, now=2026-01-08T10:31:19Z, entry=1, next=2026-01-08T12:00:00Z
INFO[0002] containerd is now running
{"level":"info","ts":"2026-01-08T10:31:19.165287Z","logger":"bbolt","caller":"bbolt@v1.4.3/db.go:321","msg":"Opening bbolt db (/var/lib/rancher/k3s/server/db/etcd/member/snap/db) successfully"}
{"level":"info","ts":"2026-01-08T10:31:19.166134Z","caller":"storage/backend.go:80","msg":"opened backend db","path":"/var/lib/rancher/k3s/server/db/etcd/member/snap/db","took":"15.138391ms"}
{"level":"info","ts":"2026-01-08T10:31:19.166185Z","caller":"etcdserver/bootstrap.go:220","msg":"restore consistentIndex","index":0}
{"level":"info","ts":"2026-01-08T10:31:19.166220Z","caller":"etcdserver/bootstrap.go:94","msg":"bootstrapping cluster"}
{"level":"info","ts":"2026-01-08T10:31:19.166290Z","caller":"etcdserver/bootstrap.go:101","msg":"bootstrapping storage"}
INFO[0002] Polling for API server readiness: GET /readyz failed: the server is currently unable to handle the request
INFO[0002] Handling backend connection request [thinkcentre1]
INFO[0002] Connected to proxy                            url="wss://127.0.0.1:6443/v1-k3s/connect"
INFO[0002] Remotedialer connected to proxy               url="wss://127.0.0.1:6443/v1-k3s/connect"
{"level":"info","ts":"2026-01-08T10:31:19.191863Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
{"level":"info","ts":"2026-01-08T10:31:19.192289Z","caller":"etcdserver/bootstrap.go:499","msg":"starting local member","local-member-id":"1678ac552db73f86","cluster-id":"ec7097221b286baa"}
{"level":"info","ts":"2026-01-08T10:31:19.192618Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
{"level":"info","ts":"2026-01-08T10:31:19.192881Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"1678ac552db73f86 switched to configuration voters=()"}
{"level":"info","ts":"2026-01-08T10:31:19.194751Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"1678ac552db73f86 became follower at term 0"}
{"level":"info","ts":"2026-01-08T10:31:19.194789Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft 1678ac552db73f86 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"}
{"level":"info","ts":"2026-01-08T10:31:19.194812Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"1678ac552db73f86 became follower at term 1"}
{"level":"info","ts":"2026-01-08T10:31:19.194896Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"1678ac552db73f86 switched to configuration voters=(1619233547878875014)"}
{"level":"warn","ts":"2026-01-08T10:31:19.206707Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2026-01-08T10:31:19.212099Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":1}
{"level":"info","ts":"2026-01-08T10:31:19.221456Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2026-01-08T10:31:19.222599Z","caller":"etcdserver/server.go:598","msg":"starting etcd server","local-member-id":"1678ac552db73f86","local-server-version":"3.6.6","cluster-version":"to_be_decided"}
{"level":"info","ts":"2026-01-08T10:31:19.223072Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/rancher/k3s/server/tls/etcd/server-client.crt, key = /var/lib/rancher/k3s/server/tls/etcd/server-client.key, client-cert=, client-key=, trusted-ca = /var/lib/rancher/k3s/server/tls/etcd/server-ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2026-01-08T10:31:19.223882Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"1678ac552db73f86","initial-advertise-peer-urls":["https://[サーバーのIP]:2380"],"listen-peer-urls":["https://127.0.0.1:2380","https://[サーバーのIP]:2380"],"advertise-client-urls":["https://[サーバーのIP]:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://[サーバーのIP]:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
INFO[0002] Migrating content from sqlite to etcd
{"level":"info","ts":"2026-01-08T10:31:19.224325Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"1678ac552db73f86","forward-ticks":9,"forward-duration":"4.5s","election-ticks":10,"election-timeout":"5s"}
{"level":"info","ts":"2026-01-08T10:31:19.224399Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/rancher/k3s/server/db/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
{"level":"info","ts":"2026-01-08T10:31:19.232640Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/rancher/k3s/server/db/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
{"level":"info","ts":"2026-01-08T10:31:19.227195Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"127.0.0.1:2380"}
{"level":"info","ts":"2026-01-08T10:31:19.227226Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"[サーバーのIP]:2380"}
{"level":"info","ts":"2026-01-08T10:31:19.228534Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"1678ac552db73f86 switched to configuration voters=(1619233547878875014)"}
{"level":"info","ts":"2026-01-08T10:31:19.224018Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
INFO[0002] Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s
INFO[0002] Configuring database table schema and indexes, this may take a moment...
{"level":"info","ts":"2026-01-08T10:31:19.237098Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/rancher/k3s/server/db/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2026-01-08T10:31:19.237184Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"127.0.0.1:2380"}
{"level":"info","ts":"2026-01-08T10:31:19.237195Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"[サーバーのIP]:2380"}
{"level":"info","ts":"2026-01-08T10:31:19.237332Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"ec7097221b286baa","local-member-id":"1678ac552db73f86","added-peer-id":"1678ac552db73f86","added-peer-peer-urls":["https://[サーバーのIP]:2380"],"added-peer-is-learner":false}
INFO[0002] Database tables and indexes are up to date
INFO[0002] Kine available at unix://kine.sock
INFO[0002] Migrating etcd key /registry/apiextensions.k8s.io/customresourcedefinitions/accesscontrolpolicies.hub.traefik.io
ERRO[0002] Sending HTTP/1.1 503 response to 127.0.0.1:38792: runtime core not ready
INFO[0002] Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=thinkcentre1 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables
INFO[0002] Running kubelet --cloud-provider=external --config-dir=/var/lib/rancher/k3s/agent/etc/kubelet.conf.d --containerd=/run/k3s/containerd/containerd.sock --hostname-override=thinkcentre1 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --node-ip=[サーバーのIP],2408:210:a8ae:9400:223:24ff:fe91:6d6f --node-labels= --read-only-port=0
I0108 10:31:23.172996   62833 event.go:389] "Event occurred" object="thinkcentre1" fieldPath="" kind="Node" apiVersion="" type="Normal" reason="CertificateExpirationOK" message="Node and Certificate Authority certificates managed by k3s are OK"
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
I0108 10:31:23.182873   62833 server.go:525] "Kubelet version" kubeletVersion="v1.34.3+k3s1"
I0108 10:31:23.185388   62833 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0108 10:31:23.185455   62833 watchdog_linux.go:95] "Systemd watchdog is not enabled"
I0108 10:31:23.185471   62833 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
I0108 10:31:23.188859   62833 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt"
E0108 10:31:23.188371   62833 event.go:359] "Server rejected event (will not retry!)" err="apiserver not ready" event="&Event{ObjectMeta:{thinkcentre1.1888bad643fdc4fa  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:thinkcentre1,UID:thinkcentre1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CertificateExpirationOK,Message:Node and Certificate Authority certificates managed by k3s are OK,Source:EventSource{Component:k3s-cert-monitor,Host:thinkcentre1,},FirstTimestamp:2026-01-08 10:31:23.169391866 +0000 UTC m=+6.109881176,LastTimestamp:2026-01-08 10:31:23.169391866 +0000 UTC m=+6.109881176,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:k3s-cert-monitor,ReportingInstance:thinkcentre1,}"
E0108 10:31:23.190767   62833 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: apiserver not ready" reflector="k8s.io/client-go@v1.34.3-k3s1/tools/cache/reflector.go:290" type="*v1.Node"
{"level":"info","ts":"2026-01-08T10:31:23.196522Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"1678ac552db73f86 is starting a new election at term 1"}
{"level":"info","ts":"2026-01-08T10:31:23.196601Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"1678ac552db73f86 became pre-candidate at term 1"}
{"level":"info","ts":"2026-01-08T10:31:23.196671Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"1678ac552db73f86 received MsgPreVoteResp from 1678ac552db73f86 at term 1"}
{"level":"info","ts":"2026-01-08T10:31:23.196713Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"1678ac552db73f86 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
{"level":"info","ts":"2026-01-08T10:31:23.196747Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"1678ac552db73f86 became candidate at term 2"}
I0108 10:31:23.198412   62833 server.go:1419] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
{"level":"info","ts":"2026-01-08T10:31:23.199383Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"1678ac552db73f86 received MsgVoteResp from 1678ac552db73f86 at term 2"}
{"level":"info","ts":"2026-01-08T10:31:23.200166Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"1678ac552db73f86 has received 1 MsgVoteResp votes and 0 vote rejections"}
{"level":"info","ts":"2026-01-08T10:31:23.200207Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"1678ac552db73f86 became leader at term 2"}
{"level":"info","ts":"2026-01-08T10:31:23.200227Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 1678ac552db73f86 elected leader 1678ac552db73f86 at term 2"}
{"level":"info","ts":"2026-01-08T10:31:23.203105Z","caller":"etcdserver/server.go:2422","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
I0108 10:31:23.205290   62833 server.go:777] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  Defaulting to /"
I0108 10:31:23.205785   62833 server.go:838] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
I0108 10:31:23.205967   62833 swap_util.go:115] "Swap is on" /proc/swaps contents=<
        Filename                                Type            Size            Used            Priority
        /swap.img                               file            4194300         0               -2
 >
I0108 10:31:23.207615   62833 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
I0108 10:31:23.207809   62833 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"thinkcentre1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"/k3s","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
I0108 10:31:23.208262   62833 topology_manager.go:138] "Creating topology manager with none policy"
I0108 10:31:23.208367   62833 container_manager_linux.go:306] "Creating device plugin manager"
I0108 10:31:23.208490   62833 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
I0108 10:31:23.209769   62833 state_mem.go:36] "Initialized new in-memory state store"
I0108 10:31:23.211078   62833 kubelet.go:475] "Attempting to sync node with API server"
I0108 10:31:23.211286   62833 kubelet.go:376] "Adding static pod path" path="/var/lib/rancher/k3s/agent/pod-manifests"
I0108 10:31:23.211315   62833 kubelet.go:387] "Adding apiserver pod source"
I0108 10:31:23.211328   62833 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
{"level":"info","ts":"2026-01-08T10:31:23.206506Z","caller":"etcdserver/server.go:1822","msg":"published local member to cluster through raft","local-member-id":"1678ac552db73f86","local-member-attributes":"{Name:thinkcentre1-435a9a3b ClientURLs:[https://[サーバーのIP]:2379]}","cluster-id":"ec7097221b286baa","publish-timeout":"15s"}
{"level":"info","ts":"2026-01-08T10:31:23.206564Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
{"level":"info","ts":"2026-01-08T10:31:23.206532Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
I0108 10:31:23.218533   62833 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5-k3s1" apiVersion="v1"
{"level":"warn","ts":"2026-01-08T10:31:23.219132Z","caller":"v3rpc/grpc.go:109","msg":"etcdserver: failed to register grpc metrics","error":"descriptor Desc{fqName: \"grpc_server_msg_sent_total\", help: \"Total number of gRPC stream messages sent by the server.\", constLabels: {}, variableLabels: {grpc_type,grpc_service,grpc_method}} already exists with the same fully-qualified name and const label values"}
I0108 10:31:23.219186   62833 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
I0108 10:31:23.219241   62833 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
{"level":"info","ts":"2026-01-08T10:31:23.219581Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2026-01-08T10:31:23.206541Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
{"level":"info","ts":"2026-01-08T10:31:23.209144Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"ec7097221b286baa","local-member-id":"1678ac552db73f86","cluster-version":"3.6"}
{"level":"info","ts":"2026-01-08T10:31:23.221022Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
I0108 10:31:23.222559   62833 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
I0108 10:31:23.222657   62833 server_v1.go:49] "podresources" method="list" useActivePods=true
{"level":"info","ts":"2026-01-08T10:31:23.224847Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc","address":"127.0.0.1:2379"}
I0108 10:31:23.226995   62833 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
I0108 10:31:23.220712   62833 server.go:1257] "Started kubelet"
{"level":"info","ts":"2026-01-08T10:31:23.227811Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc","address":"[サーバーのIP]:2379"}
E0108 10:31:23.236959   62833 server.go:911] "Failed to start healthz server" err="listen tcp 127.0.0.1:10248: bind: address already in use"
{"level":"info","ts":"2026-01-08T10:31:23.242169Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
{"level":"info","ts":"2026-01-08T10:31:23.242232Z","caller":"etcdserver/server.go:2442","msg":"cluster version is updated","cluster-version":"3.6"}
{"level":"info","ts":"2026-01-08T10:31:23.242494Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
{"level":"info","ts":"2026-01-08T10:31:23.242557Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
{"level":"info","ts":"2026-01-08T10:31:23.245308Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"http","address":"127.0.0.1:2382"}
I0108 10:31:23.263667   62833 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
I0108 10:31:23.288105   62833 server.go:310] "Adding debug handlers to kubelet server"
E0108 10:31:23.288782   62833 server.go:199] "Failed to listen and serve" err="listen tcp 0.0.0.0:10250: bind: address already in use

There's a bunch of ominous stuff in there, like "address already in use" and assorted errors..

Not at all sure this was okay, I asked Gemini, which told me that this state meant "the old and the new control plane are both running at the same time."

So that's why it was complaining about address already in use..

Let's fix that.

First, stop the k3s that's currently running.

sudo systemctl stop k3s

Check the configured token with the following command.

sudo cat /var/lib/rancher/k3s/server/node-token

The advice was that this is better managed in the config file than via k3s command-line flags (k3s reads /etc/rancher/k3s/config.yaml automatically at startup), so let's write it there.

sudo mkdir -p /etc/rancher/k3s
sudo nano /etc/rancher/k3s/config.yaml

In the config file, enable cluster-init and set the token we just looked up.

cluster-init: true
token: "<the token from the previous step>"

Restart the service, and enable it so that k3s also comes back up when the PC reboots.

sudo systemctl start k3s
sudo systemctl enable k3s
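
Restarting through systemctl is enough here because k3s reads /etc/rancher/k3s/config.yaml automatically at startup. If anything looks off after the restart, these two standard systemd commands are handy:

sudo systemctl is-active k3s               # should print "active"
sudo journalctl -u k3s -n 50 --no-pager    # recent service logs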

Now let's check the node.

$ sudo kubectl get node
NAME           STATUS   ROLES                AGE     VERSION
thinkcentre1   Ready    control-plane,etcd   2m54s   v1.34.3+k3s1

It's running as a control plane!
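
If you want extra reassurance that the datastore really was converted, the startup log above showed where the embedded etcd keeps its data, so a quick (if crude) check is to confirm that directory now exists:

sudo ls /var/lib/rancher/k3s/server/db/etcd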

And with that, the first machine is set up.

Let's move on to the second one.

Run the following command on the second PC.

curl -sfL https://get.k3s.io | K3S_TOKEN=<the token from earlier> sh -s - server \
    --server https://<server 1's IP or hostname>:6443

But here, too, I ran into trouble.

A line break slipped into the command (most likely the trailing backslash got lost when I pasted it), so it ran without the --server option ever being read.

Presumably as a result, asking for the nodes returned an absolute mess..

$ sudo kubectl get node
E0108 11:10:28.319574    7959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:6443/api?timeout=32s\": dial tcp 127.0.0.1:6443: connect: connection refused"
E0108 11:10:28.320527    7959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:6443/api?timeout=32s\": dial tcp 127.0.0.1:6443: connect: connection refused"
E0108 11:10:28.322535    7959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:6443/api?timeout=32s\": dial tcp 127.0.0.1:6443: connect: connection refused"
E0108 11:10:28.323565    7959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:6443/api?timeout=32s\": dial tcp 127.0.0.1:6443: connect: connection refused"
E0108 11:10:28.324386    7959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:6443/api?timeout=32s\": dial tcp 127.0.0.1:6443: connect: connection refused"
The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?

I sent Gemini a "help meee" and it replied: how about just uninstalling for now?

So let's uninstall for the time being.

/usr/local/bin/k3s-uninstall.sh

Then I typed the command again, making sure not to mess it up this time..

But it errored out.

[INFO]  systemd: Starting k3s
Job for k3s.service failed because the control process exited with error code.
See "systemctl status k3s.service" and "journalctl -xeu k3s.service" for details.

Apparently the uninstall hadn't completed cleanly.

So let's redo the uninstall work.

First, stop the half-running k3s.

sudo systemctl stop k3s.service

Then delete the data that was left behind.

sudo rm -rf /var/lib/rancher/k3s
sudo rm -f /etc/systemd/system/k3s.service.env
sudo rm -f /etc/systemd/system/k3s.service
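
Since we removed unit files by hand, it's also worth telling systemd to forget about them:

sudo systemctl daemon-reload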

I installed again, and it failed again.

The error message was the same as before..

So, back to Gemini once more.

Me: "No luck."

Gemini: "Can the machines even reach each other in the first place?"

Gemini: "If there's a firewall in the way, they won't be able to connect."

Me: "Fair point."

So let's check connectivity.

$ nc -zv [server IP] 6443
nc: connect to [server IP] port 6443 (tcp) failed: Connection timed out

It timed out.

Sure enough, the machines couldn't reach each other.
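
In hindsight, a quicker way to pin this on the firewall would have been to check ufw on the first server directly:

sudo ufw status verbose   # run on server 1; if 6443/tcp isn't allowed, that's the culprit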

So let's fix the firewall settings.

https://docs.k3s.io/ja/installation/requirements?_highlight=ufw&os=debian#inbound-rules-for-k3s-nodes

According to the official documentation, the recommendation is to just turn the firewall off.

So off it goes.

sudo ufw disable
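
(If you'd rather keep ufw enabled, that same requirements page lists the ports to open instead. Roughly, on each server node, something like the following should do it; note that I haven't tested this path myself.)

sudo ufw allow 6443/tcp                   # Kubernetes API server
sudo ufw allow 2379:2380/tcp              # embedded etcd (HA servers only)
sudo ufw allow 8472/udp                   # flannel VXLAN
sudo ufw allow 10250/tcp                  # kubelet metrics
sudo ufw allow from 10.42.0.0/16 to any   # pod CIDR
sudo ufw allow from 10.43.0.0/16 to any   # service CIDR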

I ran the install command once more, and this time it succeeded!

$ sudo kubectl get node
NAME                  STATUS   ROLES                AGE   VERSION
nanaonuc6caysserver   Ready    control-plane,etcd   11m   v1.34.3+k3s1
thinkcentre1          Ready    control-plane,etcd   83m   v1.34.3+k3s1

Now let's set up the third machine as well.

curl -sfL https://get.k3s.io | K3S_TOKEN=<the token from earlier> sh -s - server \
    --server https://<server 1's IP or hostname>:6443

This time it went through without a hitch.

$ sudo kubectl get node
NAME                  STATUS   ROLES                AGE   VERSION
mouse1                Ready    control-plane,etcd   6s    v1.34.3+k3s1
nanaonuc6caysserver   Ready    control-plane,etcd   15m   v1.34.3+k3s1
thinkcentre1          Ready    control-plane,etcd   86m   v1.34.3+k3s1

And with that, I've built a cluster out of three servers.

I'll add worker nodes once I bring home more PCs to act as servers.

Next, let's get the PC I usually work on to recognize the newly built cluster.

Dump the config from the first machine.

sudo cat /etc/rancher/k3s/k3s.yaml

Merge that output into the KUBECONFIG file on your everyday PC; while doing so, change the server address.

- cluster:
    server: https://127.0.0.1:6443 # <- change this value to the first server's IP address

Everything else can stay as it is.

I did the merge by hand. (If there's a nicer way to merge kubeconfigs, please do let me know in the comments.)
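
For reference, one less manual approach is to let kubectl do the merge itself. Assuming the k3s.yaml dumped above has been saved on the work PC as ~/k3s.yaml (with its server address already edited), something like this should work; just note that k3s names its cluster, user, and context all "default", so you may want to rename those in k3s.yaml first to avoid collisions:

KUBECONFIG=~/.kube/config:~/k3s.yaml kubectl config view --flatten > /tmp/merged-kubeconfig
mv /tmp/merged-kubeconfig ~/.kube/config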

And now the cluster is visible from the PC I use every day!

Thoughts

It was a fair amount of work, but I'm glad it all came together in the end.

I can't wait to add worker nodes.. itching to get to it..

Also, the server address I put in the kubeconfig is the first server's IP, which means the connection will break if that machine ever goes down; that's something I'd like to fix later on.

Next time, I'll start deploying things onto these nodes!!