Flashing OpenWrt on the Phicomm N1 Box and Deploying K3s

K3s

Last year I caught a Rancher livestream on K3s in edge computing and thought it looked like great fun. I was quite interested in K3s at the time, but with no suitable dev board at hand I wasn't willing to pay for yet another Raspberry Pi destined to gather dust 😂. Then last weekend I got a chance to get reacquainted with K3s, so I went looking for a device to run it on. After much deliberation I settled on the Phicomm N1, which has plenty going for it: it can serve as a router, run Docker, act as a NAS, and more; in short, great value 😂. I had been running an R6300V2 flashed with Merlin and a WNDR3700V4 flashed with OpenWrt as transparent proxies, plus an OpenWrt soft router on ESXi providing a transparent proxy for internal VMs, but I still hankered after a more capable device to use as a side-route gateway. All of which is how I ended up buying the N1 box everyone else had long since played to death.

Unboxing

Packaging

Ports

Boot screen

TV system

Specs

Teardown

CPU

  • Amlogic S905, ARM Cortex-A53, quad-core 2GHz; the GPU is an ARM Mali™-450. Supports 4K@60fps hardware decoding and HDMI 2.0.
  • It also has AES acceleration, so AES-based ciphers are the natural choice for any SS/SSR running on it.
╭─root@OpenWrt ~
╰─# cat /proc/cpuinfo
processor       : 0
model name      : ARMv8 Processor rev 4 (v8l)
BogoMIPS        : 48.00
Features        : fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
CPU implementer : 0x41
CPU architecture: 8
CPU variant     : 0x0
CPU part        : 0xd03
CPU revision    : 4

processor       : 1
model name      : ARMv8 Processor rev 4 (v8l)
BogoMIPS        : 48.00
Features        : fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
CPU implementer : 0x41
CPU architecture: 8
CPU variant     : 0x0
CPU part        : 0xd03
CPU revision    : 4

processor       : 2
model name      : ARMv8 Processor rev 4 (v8l)
BogoMIPS        : 48.00
Features        : fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
CPU implementer : 0x41
CPU architecture: 8
CPU variant     : 0x0
CPU part        : 0xd03
CPU revision    : 4

processor       : 3
model name      : ARMv8 Processor rev 4 (v8l)
BogoMIPS        : 48.00
Features        : fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
CPU implementer : 0x41
CPU architecture: 8
CPU variant     : 0x0
CPU part        : 0xd03
CPU revision    : 4

RAM

  • RAM chips: K4B4G1646E, DDR3-1866MHz, 512MB each; there are 4 chips front and back, for 2GB in total.
╭─root@OpenWrt ~
╰─# cat /proc/meminfo
MemTotal:        1851688 kB
MemFree:         1278128 kB
MemAvailable:    1639036 kB
╭─root@OpenWrt ~
╰─# dmesg | grep Memory
[    0.000000] Memory: 924744K/1911808K available (12926K kernel code, 1108K rwdata, 5116K rodata, 640K init, 748K bss, 69560K reserved, 917504K cma-reserved)

ROM

  • KLM8G1GEME, 8GB eMMC.
╭─root@OpenWrt ~
╰─# fdisk -l /dev/mmcblk1
Disk /dev/mmcblk1: 7.3 GiB, 7818182656 bytes, 15269888 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x7db7c786

Device         Boot   Start      End  Sectors  Size Id Type
/dev/mmcblk1p1      1433600  1695743   262144  128M  c W95 FAT32 (LBA)
/dev/mmcblk1p2      1695744  2744319  1048576  512M 83 Linux
/dev/mmcblk1p3      2744320 15269887 12525568    6G 83 Linux
╭─root@OpenWrt ~
╰─# lsblk
NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
mmcblk1      179:0    0  7.3G  0 disk
├─mmcblk1p1  179:1    0  128M  0 part /boot
├─mmcblk1p2  179:2    0  512M  0 part /
└─mmcblk1p3  179:3    0    6G  0 part /mnt/mmcblk1p3
mmcblk1boot0 179:32   0    4M  1 disk
mmcblk1boot1 179:64   0    4M  1 disk

net

  • Gigabit (1Gbps/Full) Ethernet plus a 2.4GHz/5GHz wireless NIC
  • Seeing RTL8211F made me cry: a Realtek crab chip, table flip 😡
  • Wi-Fi chip: the shield can is soldered shut, so no idea which one. All I know is it's dual-band 1x1 MIMO with 5GHz 802.11ac; the 2.4GHz link rate is 65Mbps and the 5GHz rate is 390Mbps. Why not 72Mbps and 433Mbps? No SGI (short guard interval).

(The N1 doesn't support the RTL8153, due to its power delivery rather than drivers, though a few users report it working, perhaps because the RTL8153 exists in several revisions. It does support the AX88179, at around 200Mbps. Performance-wise, then, there's no point hanging an external NIC off the N1; used as a side router with its single onboard NIC it already manages about 750Mbps.)

╭─root@OpenWrt ~
╰─# dmesg | grep net
[    0.000000] Kernel command line: root=UUID=69fd696a-85a4-4ec8-b604-4cefd053cbc1 rootfstype=btrfs rootflags=compress=zstd console=ttyAML0,115200n8 console=tty0 no_console_suspend consoleblank=0 fsck.fix=yes fsck.repair=yes net.ifnames=0 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory swapaccount=1
[    0.691325] audit: initializing netlink subsys (disabled)
[    5.131043] meson8b-dwmac c9410000.ethernet: IRQ eth_wake_irq not found
[    5.136694] meson8b-dwmac c9410000.ethernet: IRQ eth_lpi not found
[    5.142854] meson8b-dwmac c9410000.ethernet: PTP uses main clock
[    5.148731] meson8b-dwmac c9410000.ethernet: no reset control found
[    5.155366] meson8b-dwmac c9410000.ethernet: User ID: 0x11, Synopsys ID: 0x37
[    5.162029] meson8b-dwmac c9410000.ethernet:         DWMAC1000
[    5.167192] meson8b-dwmac c9410000.ethernet: DMA HW capability register supported
[    5.174594] meson8b-dwmac c9410000.ethernet: RX Checksum Offload Engine supported
[    5.182017] meson8b-dwmac c9410000.ethernet: COE Type 2
[    5.187192] meson8b-dwmac c9410000.ethernet: TX Checksum insertion supported
[    5.194171] meson8b-dwmac c9410000.ethernet: Wake-Up On Lan supported
[    5.200586] meson8b-dwmac c9410000.ethernet: Normal descriptors
[    5.206426] meson8b-dwmac c9410000.ethernet: Ring mode enabled
[    5.212198] meson8b-dwmac c9410000.ethernet: Enable RX Mitigation via HW Watchdog Timer
[    5.428271] Initializing XFRM netlink socket
[    5.521946] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
[   11.076508] meson8b-dwmac c9410000.ethernet eth0: PHY [0.2009087f:00] driver [RTL8211F Gigabit Ethernet]
[   11.083720] meson8b-dwmac c9410000.ethernet eth0: No Safety Features support found
[   11.088006] meson8b-dwmac c9410000.ethernet eth0: PTP not supported by HW
[   11.094588] meson8b-dwmac c9410000.ethernet eth0: configuring for phy/rgmii link mode
[   19.250095] meson8b-dwmac c9410000.ethernet eth0: PHY [0.2009087f:00] driver [RTL8211F Gigabit Ethernet]
[   19.265579] meson8b-dwmac c9410000.ethernet eth0: No Safety Features support found
[   19.267519] meson8b-dwmac c9410000.ethernet eth0: PTP not supported by HW
[   19.274244] meson8b-dwmac c9410000.ethernet eth0: configuring for phy/rgmii link mode
[   19.411257] netlink: 4 bytes leftover after parsing attributes in process `iw'.
[   19.595853] ieee80211 phy0: brcmf_net_attach: couldn't register the net device
[   19.597447] ieee80211 phy0: brcmf_ap_add_vif: Registering netdevice failed
[   21.019546] meson8b-dwmac c9410000.ethernet eth0: Link is Up - 1Gbps/Full - flow control rx/tx
[ 1047.556475] meson8b-dwmac c9410000.ethernet eth0: Link is Down
[ 1047.772626] meson8b-dwmac c9410000.ethernet eth0: PHY [0.2009087f:00] driver [RTL8211F Gigabit Ethernet]
[ 1047.785569] meson8b-dwmac c9410000.ethernet eth0: No Safety Features support found
[ 1047.787510] meson8b-dwmac c9410000.ethernet eth0: PTP not supported by HW
[ 1047.794242] meson8b-dwmac c9410000.ethernet eth0: configuring for phy/rgmii link mode
[ 1051.459018] meson8b-dwmac c9410000.ethernet eth0: Link is Up - 1Gbps/Full - flow control rx/tx
[ 1241.984013] netlink: 4 bytes leftover after parsing attributes in process `iw'.
[ 1346.112897] meson8b-dwmac c9410000.ethernet eth0: Link is Down
[ 1346.295582] meson8b-dwmac c9410000.ethernet eth0: PHY [0.2009087f:00] driver [RTL8211F Gigabit Ethernet]
[ 1346.312202] meson8b-dwmac c9410000.ethernet eth0: No Safety Features support found
[ 1346.314172] meson8b-dwmac c9410000.ethernet eth0: PTP not supported by HW
[ 1346.320888] meson8b-dwmac c9410000.ethernet eth0: configuring for phy/rgmii link mode
[ 1349.769672] meson8b-dwmac c9410000.ethernet eth0: Link is Up - 1Gbps/Full - flow control rx/tx

htop

Ho ho ho, it even runs Docker, so K3s will surely be no problem (

Preparation

First, a checklist of everything the flashing process needs. As the saying goes: laugh all you want today, just beware the reckoning list tomorrow

OpenWrt USB direct-flash package: https://pan.baidu.com/s/1K0bNItsY1-Br4o1EsRokkg (extraction code: lidf)

Recommended resources

The resources used while flashing all come from the sources below; look there for more, as their tutorials are better than mine, and there are videos on Bilibili that explain things even more clearly. This post is mostly just my own record 😂

Flashing

Enabling ADB

Connect the HDMI cable from the box to a display, plug the USB male-to-male cable into the box's USB port nearest the HDMI port with the other end in a USB port on the PC, and connect a mouse to the box's other USB port. With all the cables in place, plug in power to boot; the box has no power switch, so plugging and unplugging the cord is the only way to turn it on and off. It boots into the TV-box home screen; move the mouse to 固件版本 (firmware version) and click it four times rapidly to enable ADB debugging. A grey box pops up mid-screen reading 打开 adb (ADB enabled).

PS: mentioning adb, I can't help but recall last year's Fortune-500 marvel: touted to replace Android, fully self-developed, microkernel, deterministic-latency-oriented, distributed, formally verified, multi-scenario; and yet the illustrious Hongmeng (PowerPoint) OS has ADB too (snicker

One caveat: with the box wired to my router over Ethernet, the IP shown on screen was not an address on my LAN, strangely enough, whereas joining my wireless router over Wi-Fi got a normal LAN IP. If you see the same thing on a wired connection, try wireless instead.

Connecting with ADB from the PC and entering fastboot mode

All the HDMI-and-mouse setup above was so we could enter fastboot mode from ADB. This fastboot is the same one used in ordinary Android flashing, fittingly enough since the box itself runs Android, so the procedure is much the same. Download the Android SDK Platform-tools and unpack them somewhere convenient, Shift + right-click to choose 在此处打开 Powershell 窗口 (Open PowerShell window here), and run adb.exe connect IP, with IP being the address shown on the screen; a message confirms the connection. Then run adb.exe shell reboot fastboot to enter fastboot mode. It's the same idea as holding power + volume to enter fastboot on an Android phone, except the box has no volume or power buttons, so adb does it instead.

PS D:\Desktop> adb.exe connect 192.168.0.105
connected to 192.168.0.105:5555
PS D:\Desktop\N1> adb.exe shell reboot fastboot

Flashing the downgrade image

Once the box reboots into fastboot mode, the PC chimes as a new device appears, and a driver is installed for it automatically.

Right-click Computer –> Manage –> Device Manager, and find Android ADB Interface under LeMobile Android Device. If the device doesn't appear, grab a driver-installer tool to install its driver. Use the fastboot devices -l command to check whether the device is connected properly.

PS D:\Desktop\N1> fastboot devices -l
1234567890             fastboot

Verifying the images

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----   2020/3/7     17:53       14191104 boot.img
-a----   2020/3/7     17:53         672256 bootloader.img
-a----   2020/3/7     17:53            605 hash.txt
-a----   2020/3/7     17:53       18295296 recovery.img
# Verify that the images are complete and untampered before flashing. Good habits 😋
PS D:\Desktop\N1> cat .\hash.txt
File: N1_V2.19_imgs\bootloader.img
Size: 672256 bytes
Modified: Friday, May 25, 2018, 23:09:08
MD5: 80BD2EFED2F76B6ECA56F7E026549E1A
SHA1: 3A1FFCADF062748CA1D00EB80E73F2175B160A0D
CRC32: 34BA154A

File: N1_V2.19_imgs\recovery.img
Size: 18295296 bytes
Modified: Friday, May 25, 2018, 23:09:19
MD5: CAC6ED1DED5BB1D9CFAD39B2B1C6CD8A
SHA1: B468A3134B376A5295C1FD5857343128D0AC056C
CRC32: AA11C424

File: N1_V2.19_imgs\boot.img
Size: 14191104 bytes
Modified: Friday, May 25, 2018, 23:09:46
MD5: 75DA954D0C4CBCD4A86CEE501B40C5AA
SHA1: 1A0D04DB8FB57F252C72C909A3268B6B2C3BD241
CRC32: 547D7823

# Under Windows, certUtil can be used to compute file hashes
PS D:\Desktop\N1> certUtil -hashfile .\recovery.img
SHA1 hash of .\recovery.img:
b468a3134b376a5295c1fd5857343128d0ac056c
CertUtil: -hashfile command completed successfully.
PS D:\Desktop\N1> certUtil -hashfile .\boot.img
SHA1 hash of .\boot.img:
1a0d04db8fb57f252c72c909a3268b6b2c3bd241
CertUtil: -hashfile command completed successfully.
PS D:\Desktop\N1> certUtil -hashfile .\bootloader.img
SHA1 hash of .\bootloader.img:
3a1ffcadf062748ca1d00eb80e73f2175b160a0d
CertUtil: -hashfile command completed successfully.
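On Linux or macOS the same verification can be scripted with sha1sum; the expected digests below are the ones from hash.txt, and the helper skips any image that isn't present:

```shell
#!/bin/sh
# verify FILE EXPECTED_SHA1: compare a file's SHA-1 against the value in hash.txt
verify() {
    [ -f "$1" ] || { echo "SKIP: $1 not found"; return; }
    actual=$(sha1sum "$1" | awk '{print $1}')
    [ "$actual" = "$2" ] && echo "OK: $1" || echo "MISMATCH: $1"
}

# Expected SHA-1 values copied from hash.txt
verify bootloader.img 3a1ffcadf062748ca1d00eb80e73f2175b160a0d
verify boot.img       1a0d04db8fb57f252c72c909a3268b6b2c3bd241
verify recovery.img   b468a3134b376a5295c1fd5857343128d0ac056c
```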

Writing the downgrade package images

# Flash each image to its matching partition. Don't mix them up, or you'll brick the box
PS D:\Desktop\N1> fastboot flash bootloader bootloader.img
Sending 'bootloader' (656 KB)                      OKAY [  0.037s]
Writing 'bootloader'                               OKAY [  0.075s]
Finished. Total time: 0.118s
PS D:\Desktop\N1> fastboot flash boot boot.img
Sending 'boot' (13858 KB)                          OKAY [  0.620s]
Writing 'boot'                                     OKAY [  0.666s]
Finished. Total time: 1.290s
PS D:\Desktop\N1> fastboot flash recovery recovery.img
Sending 'recovery' (17866 KB)                      OKAY [  0.802s]
Writing 'recovery'                                 OKAY [  0.869s]
Finished. Total time: 1.684s
PS D:\Desktop\N1>

Flashing OpenWrt into eMMC

Once the downgrade succeeds we can boot into burning (update) mode, which allows booting from a USB device; that is the whole reason for downgrading, since newer firmware disables this ability. The OpenWrt image cannot be written straight to the box's eMMC: we first boot an embedded Armbian system and write the OpenWrt image to eMMC from inside it. Helpfully, some community builders have combined the two into a single image that can be booted from USB to write OpenWrt into eMMC, complete with a ready-made script that automates the write. That image can be found under 29+ 版 N1_OP_U 盘直刷包,及贝壳云_OP_线刷包,内核 5.4.

Writing the USB drive image

The image I wrote to the USB drive is N1_Openwrt_R20.2.15_k5.4.23-amlogic-flippy-28+.img, i.e. the N1-Openwrt_U盘直刷包. For a writing tool, either Rufus or Roadkil's Disk Image will do; it's just like making a normal boot drive.

Booting Armbian from update mode

Unplug the USB male-to-male cable first. After the box powers on, use the bat script to enter burning (update) mode; only in update mode can it boot from the USB drive we just wrote. If you downloaded the resources from Google Drive, the tool lives under /玩法0--各种玩法必备工具/2---进线刷模式工具---启动U盘系统前要先进线刷模式/进线刷模式工具:

@echo off
echo 本工具通过adb连接使N1重启进入线刷模式!
echo 请先用usb双公头线连接盒子和电脑!
echo made by webpad
set /p ip=请输入盒子的内网IP地址:
adb kill-server
if "%ip%" == "" echo 提示:请输入正确的IP地址 && goto end
echo 开始通过网络进行ADB连接……
adb connect %ip%
adb devices -l | findstr "p230"
if %ERRORLEVEL% NEQ 0 echo 连接测试失败!请确保已开启远程调试!&&goto end
echo *
echo *
echo *
echo 盒子已重启进入线刷模式,若windows发现了新设备,请在设备管理器中手动安装驱动,此窗口可以关闭...
adb shell reboot update
del adbshell.txt >nul 2>nul

:end
echo 按任意键退出...
pause > nul

The moment the script reports that the box has entered burning mode, plug in the USB drive with the freshly written image. Don't miss the window (。・∀・)ノ

Really, the tool is nothing more than an adb shell reboot update command: adb connect IP to reach the box, run adb shell reboot update, then insert the freshly written USB drive. With the HDMI cable plugged into a monitor, you'll see a console scrolling through the boot process with four little penguins across the top, which means we've booted into Armbian.

Writing OpenWrt to eMMC storage

cd /root
./inst-to-emmc.sh

After the script completes successfully, reboot with the USB drive removed and the box boots straight into OpenWrt :)

A few small tweaks on OpenWrt

I used ifconfig eth0 192.168.0.211 netmask 255.255.255.0 to change eth0's IP, but still couldn't ping the 192.168.0.1 gateway; the LAN interface's configuration has to be changed as well.

Changing the IP

The LAN interface defaults to 192.168.1.1, so if your network isn't on that subnet you can't reach the device. If the box is attached to a router rather than wired directly to your PC, you'll have to adjust the default configuration by hand before you can connect to it.

config interface 'lan'
        option type 'bridge'
        option ifname 'eth0'
        option proto 'static'
        option netmask '255.255.255.0'
        option dns '119.29.29.29'
        option gateway '192.168.0.1'
        option delegate '0'
        option ipaddr '192.168.0.211'

Change the lan interface's IP to an address reachable on your LAN, and you can then manage OpenWrt at that IP.
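If you'd rather not edit /etc/config/network by hand, the same change can be applied with OpenWrt's uci tool (addresses here are from my setup; substitute your own):

```shell
# Point the LAN bridge at an address inside the existing 192.168.0.0/24 network
uci set network.lan.ipaddr='192.168.0.211'
uci set network.lan.netmask='255.255.255.0'
uci set network.lan.gateway='192.168.0.1'
uci set network.lan.dns='119.29.29.29'
uci commit network            # persist the change to /etc/config/network
/etc/init.d/network restart   # apply it
```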

Installing essential tools

# The opkg feeds normally need tweaking first, but this build already points them at the USTC proxy mirror, a nice touch
╭─root@OpenWrt /mnt
╰─# opkg update
Downloading https://openwrt.proxy.ustclug.org/snapshots/targets/armvirt/64/packages/Packages.gz
Updated list of available packages in /var/opkg-lists/openwrt_core
Downloading
╭─root@OpenWrt /mnt
╰─# opkg install git zsh git-http ca-bundle ca-certificates wget curl
Package git (2.25.1-1) installed in root is up to date.
Package zsh (5.7.1-1) installed in root is up to date.
Package git-http (2.25.1-1) installed in root is up to date.
Package ca-bundle (20190110-2) installed in root is up to date.
Package ca-certificates (20190110-2) installed in root is up to date.
Upgrading wget on root from 1.20.3-2 to 1.20.3-3...
Downloading https://openwrt.proxy.ustclug.org/snapshots/packages/aarch64_generic/packages/wget_1.20.3-3_aarch64_generic.ipk
Multiple packages (librt and librt) providing same name marked HOLD or PREFER. Using latest.
Upgrading curl on root from 7.66.0-1 to 7.68.0-1...
Downloading https://openwrt.proxy.ustclug.org/snapshots/packages/aarch64_generic/base/curl_7.68.0-1_aarch64_generic.ipk
Configuring curl.
Configuring wget.

zsh pitfalls

Installing oh-my-zsh doesn't update /etc/passwd for you; it has to be edited by hand, so I made my usual change:

root:x:0:0:root:/root:/bin/zsh

The next day I happily went to log in over ssh and promptly crashed on the spot: it kept replying Permission denied, please try again. But that really is my password!

debug1: Next authentication method: password
[email protected]'s password:
debug1: Authentications that can continue: publickey,password,keyboard-interactive
Permission denied, please try again.
[email protected]'s password:

I could still log into the OpenWrt web UI normally, though. Trying the login again in the web UI's TTYD 终端 (TTYD terminal) just reported connection closed, which revealed the cause: the shell set for the user can't be executed at login. A reasonable way out is via the web UI: under 系统 –> 备份/升级 (System –> Backup/Flash Firmware), back up the configuration, download the backup, unpack it and fix the configuration before repacking (you can even open the tar directly in vim and edit /etc/passwd), then upload and restore the backup. Saving the nation by a detour つ﹏⊂ One caveat: preserve the backup's directory structure, so the repacked archive has exactly the same layout as before unpacking.

OpenWrt login: root
Password:
login: can't execute '/bin/zsh': No such file or directory
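That detour can be scripted end to end. Here is a runnable sketch: a synthetic stand-in for the backup archive is created first so the tar-and-sed steps can be rehearsed anywhere, while the sysupgrade commands (shown as comments) are what you would actually run on the box:

```shell
#!/bin/sh
# Stand-in for the real config backup so this can be rehearsed anywhere;
# on the box you would create one with:  sysupgrade -b backup.tar.gz
mkdir -p demo/etc
printf 'root:x:0:0:root:/root:/bin/zsh\n' > demo/etc/passwd
(cd demo && tar -czf ../backup.tar.gz .)

# The actual workaround: unpack, point root back at a shell that exists,
# then repack with the same ./-relative layout the archive had before.
mkdir -p fix
tar -xzf backup.tar.gz -C fix
sed -i 's#/bin/zsh#/bin/ash#' fix/etc/passwd
(cd fix && tar -czf ../backup-fixed.tar.gz .)

# Back on the box, restore the repaired archive:
#   sysupgrade -r backup-fixed.tar.gz
```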

On Debian and Ubuntu, zsh installs to /bin/zsh by default, with a symlink created at /usr/bin/zsh.

╭─debian@debian /mnt/d/Desktop
╰─$ ls -alh /bin/zsh
-rwxr-xr-x 1 root root 842K Feb  5  2019 /bin/zsh
╭─debian@debian /mnt/d/Desktop
╰─$ ls -alh /usr/bin/zsh
lrwxrwxrwx 1 root root 8 Aug 31  2019 /usr/bin/zsh -> /bin/zsh

OpenWrt, however, installs it to /usr/bin/zsh and creates no /bin/zsh symlink. I even tried to scp a copy of /usr/bin/zsh over to /bin/zsh, only to discover that scp also needs the user's shell to run 😐

# On OpenWrt zsh lives at /usr/bin/zsh, not at the traditional distros' /bin/zsh
╭─root@OpenWrt ~
╰─# where zsh
/usr/bin/zsh

# Tried to scp a copy of zsh into /bin/zsh. No luck! 😂
╭─debian@debian ~
╰─$ scp [email protected]:/usr/bin/zsh [email protected]:/bin/zsh
[email protected]'s password:
Permission denied, please try again.
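Once shell access is recovered (via the restored backup or the TTYD terminal), the durable fix is to recreate the Debian-style symlink yourself. The snippet rehearses it against a scratch directory so it can be tried anywhere; on the box the whole fix is the single command ln -s /usr/bin/zsh /bin/zsh:

```shell
#!/bin/sh
ROOT=scratch                             # scratch root; on the N1, operate on / directly
mkdir -p "$ROOT/bin" "$ROOT/usr/bin"
touch "$ROOT/usr/bin/zsh"                # stands in for the opkg-installed zsh
ln -sf ../usr/bin/zsh "$ROOT/bin/zsh"    # on the box: ln -s /usr/bin/zsh /bin/zsh
ls -l "$ROOT/bin/zsh"
```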

Deploying K3s

This is just a quick trial run; I'll cover K3s in more detail in a later post.

The official script

According to the official docs a single line, curl -sfL https://get.k3s.io | sh -, brings K3s up, which is far simpler than K8s next door. But the script crashed and burned the moment it ran 😂

╭─root@OpenWrt ~
╰─# curl -sfL https://get.k3s.io | sh -
[ERROR]  Can not find systemd or openrc to use as a process supervisor for k3s

OpenWrt has neither systemd nor OpenRC to act as a process supervisor, so the one-line script is out and a binary deployment is the way to go. systemd is far too big and complex for OpenWrt: routers with tens of MB, or even just a few MB, of storage simply can't fit it. OpenWrt's init system is therefore procd.

Whereas desktop distributions use glib+dbus+udev(part of systemd), OpenWrt uses libubox+ubus+hotplug2. This provides some pretty awesome functionality without requiring huge libraries with huge dependencies (cough glib).
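That said, procd can supervise long-running daemons perfectly well through an init script. The following is a minimal sketch using the standard procd conventions, not anything shipped by the K3s project, and it assumes the /opt/bin/k3s binary location used later in this post:

```shell
#!/bin/sh /etc/rc.common
# /etc/init.d/k3s -- minimal procd service sketch for k3s

START=99          # start late, after networking is up
USE_PROCD=1

start_service() {
    procd_open_instance
    procd_set_param command /opt/bin/k3s server
    procd_set_param respawn      # restart k3s if it exits
    procd_set_param stdout 1     # forward stdout to the system log
    procd_set_param stderr 1     # forward stderr too
    procd_close_instance
}
```

Saved as /etc/init.d/k3s and made executable, /etc/init.d/k3s enable && /etc/init.d/k3s start would then run it under procd's supervision.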

Binary deployment

Download the k3s-arm64 binary from the K3s releases page. I placed it under /opt/bin/ and appended export PATH=/opt/bin:$PATH to .zshrc.

usage

╭─root@OpenWrt ~
╰─# k3s
NAME:
   k3s - Kubernetes, but small and simple

USAGE:
   k3s [global options] command [command options] [arguments...]

VERSION:
   v1.17.3+k3s1 (5b17a175)

COMMANDS:
   server        Run management server
   agent         Run node agent
   kubectl       Run kubectl
   crictl        Run crictl
   ctr           Run ctr
   check-config  Run config check
   help, h       Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --debug        Turn on debug logs [$K3S_DEBUG]
   --help, -h     show help
   --version, -v  print the version

check-config

Give check-config a quick once-over before starting a K3s cluster 😋

╭─root@OpenWrt ~
╰─# k3s check-config
INFO[0000] Preparing data dir /var/lib/rancher/k3s/data/a61d93bc56bb3dd34b5ca93517164f5b503e16b6c7414e87b11cf336eeb8ebd7

Verifying binaries in /var/lib/rancher/k3s/data/a61d93bc56bb3dd34b5ca93517164f5b503e16b6c7414e87b11cf336eeb8ebd7/bin:
- sha256sum: good
- links: good

System:
- /usr/sbin iptables v1.8.3 (legacy): ok
- swap: disabled
- routes: ok

Limits:
- /proc/sys/kernel/keys/root_maxkeys: 1000000

modprobe: module configs not found in modules.dep
info: reading kernel config from /proc/config.gz ...

Generally Necessary:
- cgroup hierarchy: properly mounted [/sys/fs/cgroup]
- CONFIG_NAMESPACES: enabled
- CONFIG_NET_NS: enabled
- CONFIG_PID_NS: enabled
- CONFIG_IPC_NS: enabled
- CONFIG_UTS_NS: enabled
- CONFIG_CGROUPS: enabled
- CONFIG_CGROUP_CPUACCT: enabled
- CONFIG_CGROUP_DEVICE: enabled
- CONFIG_CGROUP_FREEZER: enabled
- CONFIG_CGROUP_SCHED: enabled
- CONFIG_CPUSETS: enabled
- CONFIG_MEMCG: enabled
- CONFIG_KEYS: enabled
- CONFIG_VETH: enabled (as module)
- CONFIG_BRIDGE: enabled
- CONFIG_BRIDGE_NETFILTER: enabled (as module)
- CONFIG_NF_NAT_IPV4: missing (fail)
- CONFIG_IP_NF_FILTER: enabled
- CONFIG_IP_NF_TARGET_MASQUERADE: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_ADDRTYPE: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_CONNTRACK: enabled
- CONFIG_NETFILTER_XT_MATCH_IPVS: enabled (as module)
- CONFIG_IP_NF_NAT: enabled
- CONFIG_NF_NAT: enabled
- CONFIG_NF_NAT_NEEDED: missing (fail)
- CONFIG_POSIX_MQUEUE: enabled

Optional Features:
- CONFIG_USER_NS: enabled
- CONFIG_SECCOMP: enabled
- CONFIG_CGROUP_PIDS: enabled
- CONFIG_BLK_CGROUP: enabled
- CONFIG_BLK_DEV_THROTTLING: enabled
- CONFIG_CGROUP_PERF: enabled
- CONFIG_CGROUP_HUGETLB: enabled
- CONFIG_NET_CLS_CGROUP: enabled (as module)
- CONFIG_CGROUP_NET_PRIO: enabled
- CONFIG_CFS_BANDWIDTH: enabled
- CONFIG_FAIR_GROUP_SCHED: enabled
- CONFIG_RT_GROUP_SCHED: enabled
- CONFIG_IP_NF_TARGET_REDIRECT: enabled (as module)
- CONFIG_IP_SET: enabled
- CONFIG_IP_VS: enabled (as module)
- CONFIG_IP_VS_NFCT: enabled
- CONFIG_IP_VS_PROTO_TCP: enabled
- CONFIG_IP_VS_PROTO_UDP: enabled
- CONFIG_IP_VS_RR: enabled (as module)
- CONFIG_EXT4_FS: enabled
- CONFIG_EXT4_FS_POSIX_ACL: enabled
- CONFIG_EXT4_FS_SECURITY: enabled
- Network Drivers:
  - "overlay":
    - CONFIG_VXLAN: enabled (as module)
      Optional (for encrypted networks):
      - CONFIG_CRYPTO: enabled
      - CONFIG_CRYPTO_AEAD: enabled
      - CONFIG_CRYPTO_GCM: enabled
      - CONFIG_CRYPTO_SEQIV: enabled
      - CONFIG_CRYPTO_GHASH: enabled
      - CONFIG_XFRM: enabled
      - CONFIG_XFRM_USER: enabled
      - CONFIG_XFRM_ALGO: enabled
      - CONFIG_INET_ESP: enabled
      - CONFIG_INET_XFRM_MODE_TRANSPORT: missing
- Storage Drivers:
  - "overlay":
    - CONFIG_OVERLAY_FS: enabled

STATUS: 2 (fail)

So CONFIG_NF_NAT_IPV4: missing (fail) and CONFIG_NF_NAT_NEEDED: missing (fail) 🙃. Both symbols were folded into CONFIG_NF_NAT upstream around kernel 5.1, so their absence on this 5.4 kernel is to be expected; let's ignore them for now and see whether it runs.

Creating a K3s cluster

According to the K3s docs, k3s server is all it takes to spin up a K3s cluster.

╭─root@OpenWrt ~
╰─# k3s server
INFO[2020-03-08T20:17:00.103499172+08:00] Starting k3s v1.17.3+k3s1 (5b17a175)
INFO[2020-03-08T20:17:00.112443368+08:00] Kine listening on unix://kine.sock
INFO[2020-03-08T20:17:01.572347551+08:00] Active TLS secret  (ver=) (count 7): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-192.168.0.212:192.168.0.212 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/hash:089b0c6a0b78e5f9d3a33b154e97185f644ed693bee80d4559c47e00f19af2f8]
INFO[2020-03-08T20:17:01.591244044+08:00] Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key
Flag --basic-auth-file has been deprecated, Basic authentication mode is deprecated and will be removed in a future release. It is not recommended for production environments.
I0308 20:17:01.593861    9319 server.go:622] external host was not specified, using 192.168.0.212
I0308 20:17:01.594884    9319 server.go:163] Version: v1.17.3+k3s1
I0308 20:17:04.485872    9319 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I0308 20:17:04.485954    9319 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I0308 20:17:04.491004    9319 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I0308 20:17:04.491084    9319 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I0308 20:17:04.592193    9319 master.go:267] Using reconciler: lease
I0308 20:17:04.699310    9319 rest.go:115] the default service ipfamily for this cluster is: IPv4
W0308 20:17:06.161374    9319 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
W0308 20:17:06.212135    9319 genericapiserver.go:404] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0308 20:17:06.264606    9319 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0308 20:17:06.361403    9319 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0308 20:17:06.379432    9319 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0308 20:17:06.447661    9319 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0308 20:17:06.550084    9319 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0308 20:17:06.550168    9319 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0308 20:17:06.599929    9319 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I0308 20:17:06.600017    9319 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I0308 20:17:16.394308    9319 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
I0308 20:17:16.394308    9319 dynamic_cafile_content.go:166] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt
I0308 20:17:16.395251    9319 dynamic_serving_content.go:129] Starting serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key
I0308 20:17:16.397910    9319 secure_serving.go:178] Serving securely on 127.0.0.1:6444
I0308 20:17:16.398017    9319 tlsconfig.go:219] Starting DynamicServingCertificateController
I0308 20:17:16.398379    9319 autoregister_controller.go:140] Starting autoregister controller
I0308 20:17:16.398459    9319 cache.go:32] Waiting for caches to sync for autoregister controller
I0308 20:17:16.398723    9319 available_controller.go:386] Starting AvailableConditionController
I0308 20:17:16.398794    9319 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0308 20:17:16.399089    9319 crdregistration_controller.go:111] Starting crd-autoregister controller
I0308 20:17:16.399288    9319 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
I0308 20:17:16.399546    9319 crd_finalizer.go:263] Starting CRDFinalizer
I0308 20:17:16.399790    9319 controller.go:85] Starting OpenAPI controller
I0308 20:17:16.400002    9319 customresource_discovery_controller.go:208] Starting DiscoveryController
I0308 20:17:16.400341    9319 naming_controller.go:288] Starting NamingConditionController
I0308 20:17:16.400638    9319 establishing_controller.go:73] Starting EstablishingController
I0308 20:17:16.400854    9319 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I0308 20:17:16.401201    9319 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0308 20:17:16.402017    9319 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0308 20:17:16.402094    9319 shared_informer.go:197] Waiting for caches to sync for cluster_authentication_trust_controller
I0308 20:17:16.403447    9319 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0308 20:17:16.403537    9319 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0308 20:17:16.403681    9319 controller.go:81] Starting OpenAPI AggregationController
I0308 20:17:16.418176    9319 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
I0308 20:17:16.419045    9319 dynamic_cafile_content.go:166] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt
I0308 20:17:16.606683    9319 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller
I0308 20:17:16.613199    9319 cache.go:39] Caches are synced for autoregister controller
I0308 20:17:16.614373    9319 cache.go:39] Caches are synced for AvailableConditionController controller
I0308 20:17:16.614504    9319 shared_informer.go:204] Caches are synced for crd-autoregister
I0308 20:17:16.616912    9319 cache.go:39] Caches are synced for APIServiceRegistrationController controller
E0308 20:17:16.633217    9319 controller.go:150] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.43.0.1": cannot allocate resources of type serviceipallocations at this time
E0308 20:17:16.639707    9319 controller.go:155] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.0.212, ResourceVersion: 0, AdditionalErrorMsg:
I0308 20:17:17.394171    9319 controller.go:107] OpenAPI AggregationController: Processing item
I0308 20:17:17.394300    9319 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0308 20:17:17.394391    9319 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0308 20:17:17.431078    9319 storage_scheduling.go:133] created PriorityClass system-node-critical with value 2000001000
I0308 20:17:17.445315    9319 storage_scheduling.go:133] created PriorityClass system-cluster-critical with value 2000000000
I0308 20:17:17.445429    9319 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
I0308 20:17:18.939225    9319 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0308 20:17:19.103599    9319 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0308 20:17:19.434785    9319 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.0.212]
I0308 20:17:19.438264    9319 controller.go:606] quota admission added evaluator for: endpoints
INFO[2020-03-08T20:17:19.626006745+08:00] Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --secure-port=0
INFO[2020-03-08T20:17:19.629360634+08:00] Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true
I0308 20:17:19.655674    9319 controllermanager.go:161] Version: v1.17.3+k3s1
I0308 20:17:19.657868    9319 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
INFO[2020-03-08T20:17:19.658915754+08:00] Waiting for cloudcontroller rbac role to be created
INFO[2020-03-08T20:17:19.664219593+08:00] Creating CRD addons.k3s.cattle.io
INFO[2020-03-08T20:17:19.686759095+08:00] Creating CRD helmcharts.helm.cattle.io
W0308 20:17:19.691464    9319 authorization.go:47] Authorization is disabled
W0308 20:17:19.691548    9319 authentication.go:92] Authentication is disabled
I0308 20:17:19.691602    9319 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
INFO[2020-03-08T20:17:19.762954369+08:00] Waiting for CRD helmcharts.helm.cattle.io to become available
INFO[2020-03-08T20:17:20.280122513+08:00] Done waiting for CRD helmcharts.helm.cattle.io to become available
INFO[2020-03-08T20:17:20.320314726+08:00] Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.81.0.tgz
INFO[2020-03-08T20:17:20.321243908+08:00] Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml
INFO[2020-03-08T20:17:20.321700333+08:00] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml
INFO[2020-03-08T20:17:20.322113756+08:00] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml
INFO[2020-03-08T20:17:20.322546305+08:00] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml
INFO[2020-03-08T20:17:20.322947895+08:00] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml
INFO[2020-03-08T20:17:20.323365277+08:00] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml
INFO[2020-03-08T20:17:20.323768992+08:00] Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml
INFO[2020-03-08T20:17:20.324187832+08:00] Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml
INFO[2020-03-08T20:17:20.324764300+08:00] Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml
INFO[2020-03-08T20:17:20.325164932+08:00] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml
INFO[2020-03-08T20:17:20.325563897+08:00] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml
INFO[2020-03-08T20:17:20.325996113+08:00] Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml
INFO[2020-03-08T20:17:20.527152139+08:00] Starting k3s.cattle.io/v1, Kind=Addon controller
INFO[2020-03-08T20:17:20.529057920+08:00] Node token is available at /var/lib/rancher/k3s/server/token
INFO[2020-03-08T20:17:20.529286008+08:00] To join node to cluster: k3s agent -s https://192.168.0.212:6443 -t ${NODE_TOKEN}
INFO[2020-03-08T20:17:20.527299933+08:00] Waiting for master node  startup: resource name may not be empty
INFO[2020-03-08T20:17:20.725057527+08:00] Waiting for cloudcontroller rbac role to be created
I0308 20:17:20.810135    9319 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io
INFO[2020-03-08T20:17:20.830872877+08:00] Starting helm.cattle.io/v1, Kind=HelmChart controller
INFO[2020-03-08T20:17:20.831652182+08:00] Starting batch/v1, Kind=Job controller
INFO[2020-03-08T20:17:20.832798326+08:00] Starting /v1, Kind=Service controller
INFO[2020-03-08T20:17:20.833998971+08:00] Starting /v1, Kind=Pod controller
INFO[2020-03-08T20:17:20.835217450+08:00] Starting /v1, Kind=Endpoints controller
INFO[2020-03-08T20:17:20.836690891+08:00] Starting /v1, Kind=Secret controller
INFO[2020-03-08T20:17:20.838569298+08:00] Starting /v1, Kind=Node controller
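The log above prints a hint for joining additional nodes ("To join node to cluster"). A minimal sketch of that join step follows; the server URL and token path are taken from this install (both are assumptions to adjust for your own network), and the command is only assembled as a string here so it can be inspected before being run on the new node:

```shell
# Sketch: build the agent join command from the hint in the k3s server log.
# SERVER and TOKEN_FILE default to the values from this install; adjust both
# for your own setup.
SERVER=${SERVER:-https://192.168.0.212:6443}
TOKEN_FILE=${TOKEN_FILE:-/var/lib/rancher/k3s/server/token}  # lives on the master

join_cmd() {
    # Print the command; run the printed line on the node that should join.
    echo "k3s agent -s $SERVER -t $(cat "$TOKEN_FILE")"
}
```

Reading the token at call time (rather than baking it into the script) means the same helper keeps working after the master rotates its token.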

Cluster Status

╭─root@OpenWrt ~
╰─# ks cluster-info
Kubernetes master is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
╭─root@OpenWrt ~
╰─# ks get node
NAME      STATUS   ROLES    AGE     VERSION
openwrt   Ready    master   5m26s   v1.17.3+k3s1
╭─root@OpenWrt ~
╰─# ks get pod -n kube-system
NAME                                      READY   STATUS              RESTARTS   AGE
metrics-server-6d684c7b5-8gv8j            1/1     Running             0          5m20s
local-path-provisioner-58fb86bdfd-h8jkk   1/1     Running             0          5m20s
svclb-traefik-9kwxx                       0/2     ContainerCreating   0          3m29s
helm-install-traefik-nw9td                0/1     Completed           2          5m20s
coredns-d798c9dd-62sb9                    1/1     Running             0          5m20s
traefik-6787cddb4b-p6hs9                  1/1     Running             0          3m30s
╭─root@OpenWrt ~
╰─# ks describe node
Name:               openwrt
Roles:              master
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/instance-type=k3s
                    beta.kubernetes.io/os=linux
                    k3s.io/hostname=openwrt
                    k3s.io/internal-ip=192.168.0.212
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=openwrt
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=true
                    node.kubernetes.io/instance-type=k3s
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"ca:8f:da:03:f3:e5"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.0.212
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sun, 08 Mar 2020 20:17:23 +0800
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  openwrt
  AcquireTime:     <unset>
  RenewTime:       Sun, 08 Mar 2020 20:23:44 +0800
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Sun, 08 Mar 2020 20:17:38 +0800   Sun, 08 Mar 2020 20:17:38 +0800   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Sun, 08 Mar 2020 20:19:54 +0800   Sun, 08 Mar 2020 20:17:23 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Sun, 08 Mar 2020 20:19:54 +0800   Sun, 08 Mar 2020 20:17:23 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Sun, 08 Mar 2020 20:19:54 +0800   Sun, 08 Mar 2020 20:17:23 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Sun, 08 Mar 2020 20:19:54 +0800   Sun, 08 Mar 2020 20:17:34 +0800   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.0.212
  Hostname:    openwrt
Capacity:
  cpu:                4
  ephemeral-storage:  925844Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             1851688Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  900661043
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             1851688Ki
  pods:               110
System Info:
  Machine ID:                 96581db4e82a9fb36b0553115e64de1a
  System UUID:
  Boot ID:                    87856a60-0482-4ca3-a144-2ec073e1d2c7
  Kernel Version:             5.4.23-amlogic-flippy-28+
  OS Image:                   OpenWrt SNAPSHOT
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  containerd://1.3.3-k3s1
  Kubelet Version:            v1.17.3+k3s1
  Kube-Proxy Version:         v1.17.3+k3s1
PodCIDR:                      10.42.0.0/24
PodCIDRs:                     10.42.0.0/24
ProviderID:                   k3s://openwrt
Non-terminated Pods:          (5 in total)
  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
  kube-system                 metrics-server-6d684c7b5-8gv8j             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
  kube-system                 local-path-provisioner-58fb86bdfd-h8jkk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
  kube-system                 svclb-traefik-9kwxx                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
  kube-system                 coredns-d798c9dd-62sb9                     100m (2%)     0 (0%)      70Mi (3%)        170Mi (9%)     6m10s
  kube-system                 traefik-6787cddb4b-p6hs9                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests   Limits
  --------           --------   ------
  cpu                100m (2%)  0 (0%)
  memory             70Mi (3%)  170Mi (9%)
  ephemeral-storage  0 (0%)     0 (0%)
Events:
  Type     Reason                   Age                    From                 Message
  ----     ------                   ----                   ----                 -------
  Normal   Starting                 6m24s                  kubelet, openwrt     Starting kubelet.
  Warning  InvalidDiskCapacity      6m24s                  kubelet, openwrt     invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientPID     6m23s (x2 over 6m24s)  kubelet, openwrt     Node openwrt status is now: NodeHasSufficientPID
  Normal   NodeHasSufficientMemory  6m23s (x2 over 6m24s)  kubelet, openwrt     Node openwrt status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    6m23s (x2 over 6m24s)  kubelet, openwrt     Node openwrt status is now: NodeHasNoDiskPressure
  Normal   NodeAllocatableEnforced  6m23s                  kubelet, openwrt     Updated Node Allocatable limit across pods
  Normal   Starting                 6m23s                  kube-proxy, openwrt  Starting kube-proxy.
  Normal   NodeReady                6m13s                  kubelet, openwrt     Node openwrt status is now: NodeReady

Wrapping Up

The most important OpenWrt topic, using it as a transparent-proxy bypass gateway, hasn't been covered yet; I'll update that part later. After all, I bought this box to play with K3s 😂. In short, this piece of "e-waste" is well worth picking up. Maybe I should grab a few more and deploy a K3s cluster? Highly available? Five nodes, three masters and two workers? (runs away)

An unexpected bonus: the box's USB port can be power-controlled by using echo to write values into /sys/bus/usb/devices/usb1/power/level. Run the USB port's positive and negative leads to a relay, and you have a network-controlled switch. This is a feature I had long wished for, in vain, on the R6300V2 and WNDR3700V4, yet this cheap little box has it. A pleasant surprise!

╭─root@OpenWrt ~
╰─# tree /sys/bus/usb/devices/usb1/power/
/sys/bus/usb/devices/usb1/power/
├── active_duration
├── autosuspend
├── autosuspend_delay_ms
├── connected_duration
├── control
├── level
├── runtime_active_time
├── runtime_status
├── runtime_suspended_time
├── wakeup
├── wakeup_abort_count
├── wakeup_active
├── wakeup_active_count
├── wakeup_count
├── wakeup_expire_count
├── wakeup_last_time_ms
├── wakeup_max_time_ms
└── wakeup_total_time_ms
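The echo trick described above can be sketched as a pair of helper functions. The sysfs path is the one from this box, but note that power/level with the values on/suspend is a legacy interface; newer kernels expose power/control (values on/auto) instead, so treat the exact attribute as an assumption to verify against your kernel. The path is parameterised so the snippet can be dry-run against any writable file:

```shell
# Network-controlled switch via USB bus power (sketch).
# USB_POWER defaults to the sysfs node from this box; on recent kernels the
# equivalent attribute is power/control (values: on / auto).
USB_POWER=${USB_POWER:-/sys/bus/usb/devices/usb1/power/level}

usb_off() { echo suspend > "$USB_POWER"; }  # cut bus power, relay opens
usb_on()  { echo on > "$USB_POWER"; }       # restore power, relay closes
```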

Finally, Happy Hacking, everyone!