Harbor High-Availability Installation (Offline Method)

This walkthrough is based on the "Harbor Advanced Practice" series from a WeChat public account.

1 Environment Overview

1.1 Architecture

Harbor's Redis cache and PostgreSQL database components are moved out to external, highly available services, and external shared storage provides data sharing across multiple Harbor instances, so the Harbor instances can be scaled out horizontally.

1.2 Host Inventory

IP address      Hostname     Description
192.168.88.131  harbor-data  Shared storage, external database, and external cache for the Harbor instances
192.168.88.138  harbor1      Harbor instance 1, port 8021
192.168.88.139  harbor2      Harbor instance 2, port 8021
192.168.88.111  /            Load-balancer VIP, port 8121

1.3 Service Versions

Service         Version required   Version installed
Harbor          /                  2.5.0
Docker          17.06.0+           19.03.8
Docker-compose  1.18.0+            v2.2.3
Redis           6.0.16             6.2.7

2 Host Initialization

2.1 Install Docker

$ wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
$ yum install -y docker-ce
$ systemctl enable --now docker
$ systemctl status docker
$ cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": [
    "https://xcg41ct3.mirror.aliyuncs.com",
    "https://3hjcmqfe.mirror.aliyuncs.com"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "500m",
    "max-file": "2"
  }
}
EOF
$ systemctl daemon-reload
$ systemctl restart docker

Notes on the options: exec-opts sets the cgroup driver; registry-mirrors lists one or more registry mirror endpoints; max-file caps the number of rotated log files; live-restore keeps containers running across a Docker daemon restart (mostly used with Kubernetes).

2.2 Install docker-compose

$ wget https://github.com/docker/compose/releases/download/v2.2.3/docker-compose-linux-x86_64 # check that the link is still valid before downloading
$ mv docker-compose-linux-x86_64 /usr/local/bin/docker-compose
$ chmod +x /usr/local/bin/docker-compose
$ docker-compose version
Docker Compose version v2.2.3

2.3 Configure Kernel Parameters

$ modprobe br_netfilter
$ cat >> /etc/sysctl.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
# enable IP forwarding (sysctl.conf does not allow trailing comments on a value line)
net.ipv4.ip_forward = 1
EOF
$ sysctl -p

If you see "sysctl: cannot stat /proc/sys/net/ipv4/ip_forward: No such file or directory",
the conntrack module may not be loaded; check with lsmod | grep conntrack, and load it with modprobe ip_conntrack.
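A quick sanity check that the modules and parameters actually took effect (this sketch reads /proc directly, so it works even where the sysctl binary is absent):

```shell
# Show whether the bridge-netfilter / conntrack modules are loaded
lsmod | grep -E 'br_netfilter|conntrack' || echo "modules not loaded yet"
# Read the forwarding flag straight from /proc; "1" means forwarding is enabled
cat /proc/sys/net/ipv4/ip_forward
```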

3 Use NFS for External Shared Storage

3.1 Deploy the NFS Server

  • Install and start the NFS server

    $ yum install -y nfs-utils
    $ systemctl start nfs && systemctl enable nfs && systemctl status nfs
    $ chkconfig nfs on #legacy way to enable at boot (redundant after systemctl enable)
  • Create the shared directory

Clients will store their data remotely under this shared directory.

[root@harbor-data ~]# mkdir -p /data/harbor_data
  • Edit the exports file

    [root@harbor-data harbor_data]# cat /etc/exports
    /data/harbor_data 192.168.88.0/24(rw,no_root_squash)
  • Restart the NFS service

    [root@harbor-data harbor_data]# systemctl restart nfs
    [root@harbor-data harbor_data]# systemctl status nfs
    ● nfs-server.service - NFS server and services
    Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
    Active: active (exited) since Fri 2023-04-07 06:50:46 CST; 3s ago
    Process: 2020 ExecStopPost=/usr/sbin/exportfs -f (code=exited, status=0/SUCCESS)
    Process: 2017 ExecStopPost=/usr/sbin/exportfs -au (code=exited, status=0/SUCCESS)
    Process: 2015 ExecStop=/usr/sbin/rpc.nfsd 0 (code=exited, status=0/SUCCESS)
    Process: 2050 ExecStartPost=/bin/sh -c if systemctl -q is-active gssproxy; then systemctl reload gssproxy ; fi (code=exited, status=0/SUCCESS)
    Process: 2035 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
    Process: 2034 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
    Main PID: 2035 (code=exited, status=0/SUCCESS)
    Tasks: 0
    Memory: 0B
    CGroup: /system.slice/nfs-server.service

    Apr 07 06:50:45 harbor-data systemd[1]: Starting NFS server and services...
    Apr 07 06:50:46 harbor-data systemd[1]: Started NFS server and services.

  • Check the export list

    [root@harbor-data harbor_data]# showmount -e localhost
    Export list for localhost:
    /data/harbor_data 192.168.88.0/24

3.2 Deploy the Clients

Run on harbor1 and harbor2:

$ yum -y install nfs-utils
$ systemctl enable --now rpcbind #nfs-utils is a package, not a service; clients only need rpcbind running (for NFSv3 mounts)

3.3 Mount the NFS Share on the Clients

On the harbor1 and harbor2 nodes, create the instance storage directory, then mount it from NFS.

[root@harbor1 harbor]# mkdir -pv /data/harbor_data
mkdir: created directory ‘/data/harbor_data’
[root@harbor1 harbor]# cat << EOF >> /etc/fstab
> 192.168.88.131:/data/harbor_data /data/harbor_data nfs defaults 0 0
> EOF
[root@harbor1 harbor]# mount -a

Mount entry format: <NFS server IP>:<exported directory> <local mount point> nfs defaults 0 0

Test that the share works correctly.
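For example, a simple round-trip check; MOUNT_DIR defaults to a scratch directory so the snippet can be dry-run anywhere, and should be set to /data/harbor_data on the real nodes:

```shell
# On the real nodes: MOUNT_DIR=/data/harbor_data (defaults to a scratch dir for a dry run)
MOUNT_DIR="${MOUNT_DIR:-$(mktemp -d)}"
# Write a file through the mount, read it back, then clean up
echo "nfs-test" > "$MOUNT_DIR/nfs_write_test"
cat "$MOUNT_DIR/nfs_write_test"
rm -f "$MOUNT_DIR/nfs_write_test"
```

When run on harbor1 with MOUNT_DIR=/data/harbor_data, the file should also be visible from harbor2 until it is removed.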

4 Deploy the Redis Cache Service (from Source)

Deploy Redis on harbor-data to provide an external cache for the harbor1 and harbor2 instances.

4.1 Download the Source Package

$ wget https://download.redis.io/releases/redis-6.2.7.tar.gz

4.2 Install Build Dependencies

$ yum install -y gcc gcc-c++

4.3 Build from Source

$ mkdir -p /data/app/
$ tar zxvf redis-6.2.7.tar.gz -C /data/app
$ cd /data/app/redis-6.2.7/
$ make #compile
$ make install #install; without a prefix this defaults to /usr/local/bin

4.4 Edit the Configuration File

By default Redis only accepts local connections, so a few parameters need to change:

  • allow external connections;
  • the Redis startup mode;
  • the password for remote Redis connections;
    bind 192.168.88.** #line 75, allow connections from LAN hosts
    daemonize yes #line 259, change no to yes so Redis runs as a daemon
    requirepass 123456 #line 903, set the auth password for Redis connections

4.5 Start the Redis Service

Because daemonize is enabled in the config, redis-server backgrounds itself when started directly with the configuration file:

[root@habor-data redis-6.2.7]# pwd
/data/app/redis-6.2.7
[root@habor-data redis-6.2.7]# redis-server redis.conf

4.6 Verify the Service

1) Check the Redis version (redis-cli -v reports the client version, which matches the server built here)

$ redis-cli -v  
redis-cli 6.2.7

2) Check the port

Redis listens on port 6379 by default.

[root@habor-data ~]# ps -ef | grep 6379
root 6504 1 0 04:52 ? 00:00:47 redis-server *:6379
root 8882 8864 0 15:12 pts/0 00:00:00 grep --color=auto 6379

3) Connect to Redis from a client

harbor1 and harbor2 act as the Redis clients.

$ which redis-cli      #locate the redis-cli binary  
/usr/local/bin/redis-cli
[root@habor-data ~]# scp /usr/local/bin/redis-cli 192.168.88.138:/usr/local/bin/
[root@habor-data ~]# scp /usr/local/bin/redis-cli 192.168.88.139:/usr/local/bin/

Use redis-cli on the client to connect to the Redis server:

[root@harbor1 ~]# redis-cli -h 192.168.88.131 
192.168.88.131:6379> auth 123456
OK

You can also supply the password with the -a option instead of running auth interactively.
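For instance, a one-shot connectivity check using the values from this setup (guarded so it is a no-op on machines without redis-cli; a PONG reply confirms both reachability and authentication):

```shell
REDIS_HOST=192.168.88.131
REDIS_PASS=123456
# Ping the external Redis; skip quietly where the client is not installed
if command -v redis-cli >/dev/null 2>&1; then
  redis-cli -h "$REDIS_HOST" -a "$REDIS_PASS" ping || echo "could not reach Redis"
fi
```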

5 Deploy the External PostgreSQL Service (from Source)

5.1 Create the postgres User

The superuser (root) cannot start PostgreSQL, so create a dedicated postgres user.

[root@habor-data ~]# useradd postgres
[root@habor-data ~]# id postgres
uid=1000(postgres) gid=1000(postgres) groups=1000(postgres)

5.2 Install Build Dependencies

$ yum -y install readline-devel  zlib-devel  gcc zlib

5.3 Download and Extract the Source

$ wget https://ftp.postgresql.org/pub/source/v13.5/postgresql-13.5.tar.gz --no-check-certificate  
$ tar zxvf postgresql-13.5.tar.gz -C /data/app/

5.4 Build and Install

$ cd /data/app/postgresql-13.5/  
$ ./configure --prefix=/usr/local/postgresql
$ make && make install

5.5 Create the Data Directory

$ mkdir -p /data/postgresql/data  
$ chown -R postgres:postgres /usr/local/postgresql/
$ chown -R postgres:postgres /data/postgresql/data/

5.6 Set the postgres Environment Variables

[root@harbor-data postgresql-13.5]# su  - postgres  
[postgres@harbor-data ~]$ vim ~/.bash_profile
PGHOME=/usr/local/postgresql # PostgreSQL install prefix
export PGHOME
PGDATA=/data/postgresql/data # database data directory
export PGDATA
PATH=$PATH:$HOME/bin:$HOME/.local/bin:$PGHOME/bin
export PATH
[postgres@harbor-data ~]$ source ./.bash_profile
[postgres@harbor-data ~]$ which psql
/usr/local/postgresql/bin/psql

Check the version:

[postgres@harbor-data ~]$ psql -V
psql (PostgreSQL) 13.5

5.7 Initialize the Database

Because of Red Hat family distribution policy, installing PostgreSQL neither enables auto-start nor initializes the database automatically. To complete the setup, initialize the cluster manually:

[postgres@habor-data postgresql]$ initdb
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.UTF-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /data/postgresql/data ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... Asia/Shanghai
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

initdb: warning: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

pg_ctl -D /data/postgresql/data -l logfile start

5.8 Start PostgreSQL

Start the server with the exact command printed at the end of initdb:

[postgres@habor-data postgresql]$ pg_ctl -D /data/postgresql/data -l logfile start
waiting for server to start.... done
server started

5.9 Set (Change) the PostgreSQL Password

By default, local psql logins require no password even after one is set, because local connections are set to trust in pg_hba.conf. For security, change trust to password so authentication is enforced. (This is also the recovery path if you ever forget the password: temporarily set trust, change the password, then set password again.)

[postgres@habor-data postgresql]$ psql
psql (13.5)
Type "help" for help.

postgres=# \password
Enter new password:
Enter it again:
postgres=# \q

5.10 Allow Remote PostgreSQL Logins

[postgres@harbor-data ~]$ vim /data/postgresql/data/postgresql.conf  
listen_addresses = '*' #line 60, listen on all addresses
[postgres@harbor-data ~]$ vim + /data/postgresql/data/pg_hba.conf
local all all password
host all all 0.0.0.0/0 password
host all all ::1/128 password

5.11 Restart PostgreSQL

[postgres@habor-data postgresql]$ pg_ctl -D /data/postgresql/data -l logfile restart
waiting for server to shut down.... done
server stopped
waiting for server to start.... done
server started

5.12 Create the Databases

Databases Harbor needs in the external PostgreSQL instance:

  • registry
  • notary_server
  • notary_signer

Harbor currently supports only PostgreSQL as the external database. The registry, notary_signer, and notary_server databases must be created manually; Harbor creates the tables inside each database automatically on startup.

Since this is a demo environment, the examples use the superuser postgres. In production, create a dedicated user and grant it the appropriate privileges on the registry, notary_signer, and notary_server databases.

[postgres@habor-data postgresql]$ psql
psql (13.5)
Type "help" for help.

postgres=# create database registry;
CREATE DATABASE
postgres=# create database notary_signer;
CREATE DATABASE
postgres=# create database notary_server;
CREATE DATABASE
postgres=#
postgres=# \q
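A quick check that all three databases exist, using the values from this walkthrough (the psql call only runs where the client is installed and the server is reachable):

```shell
PG_HOST=192.168.88.131
# The three databases Harbor expects to find
DBS="registry notary_signer notary_server"
if command -v psql >/dev/null 2>&1; then
  for db in $DBS; do
    # -lqt lists databases; grep -w matches the exact name
    psql -h "$PG_HOST" -U postgres -lqt | grep -qw "$db" && echo "$db: ok" || echo "$db: missing"
  done
fi
```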

6 Load Balancing (Nginx + Keepalived)

Keepalived and Nginx together make Harbor highly available. Keepalived runs on the harbor1 and harbor2 nodes and manages the VIP; Nginx forwards requests arriving at the VIP to the backend harbor server group.

6.1 Install Nginx and Keepalived

Run on harbor1 and harbor2:

$ wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
$ yum install -y nginx keepalived
$ yum -y install nginx-all-modules.noarch     #install nginx's stream module

Nginx added the stream module in 1.9.0 for layer-4 forwarding, proxying, and load balancing. For an Nginx built from source, pass --with-stream to ./configure to enable it.

6.2 Edit the Nginx Configuration

The Nginx configuration file is identical on harbor1 and harbor2.

# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    upstream harbor {
        server 192.168.88.138:8021; # harbor1
        server 192.168.88.139:8021; # harbor2
    }
    server {
        listen 8121; # nginx shares these hosts with Harbor, so this cannot be 8021 or the ports would clash
        proxy_pass harbor;
    }
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 4096;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;
}

Check the Nginx configuration syntax:

$ nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

6.3 Edit the Keepalived Configuration

harbor1 serves as the keepalived master node and harbor2 as the backup; their keepalived configuration files differ.

1) Master node (harbor1)

[root@harbor1 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        454536188@qq.com
    }
    router_id master1
}

##### health check (defined before the instance that tracks it)
vrrp_script chk_nginx {
    script "/etc/keepalived/check_nginx.sh"
    interval 2
    weight -20
}

vrrp_instance lidabai {
    state MASTER
    interface ens33
    mcast_src_ip 192.168.88.138
    virtual_router_id 107
    priority 100
    advert_int 1
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.88.111/24 # the VIP
    }
    track_script {
        chk_nginx
    }
}

2) Backup node (harbor2)

[root@harbor2 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        454536188@qq.com
    }
    router_id master2
}

vrrp_script chk_nginx {
    script "/etc/keepalived/check_nginx.sh"
    interval 2
    weight -20
}

vrrp_instance lidabai {
    state BACKUP
    interface ens33
    mcast_src_ip 192.168.88.139
    virtual_router_id 107
    priority 80 # lower priority than the master
    advert_int 1
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.88.111/24
    }
    track_script {
        chk_nginx
    }
}

6.4 Write the Health-Check Script

Add the script on both harbor1 and harbor2:

[root@harbor2 ~]# cat /etc/keepalived/check_nginx.sh
#!/bin/bash
# 1. Check whether Nginx is alive
counter=$(ps -C nginx --no-header | wc -l)
if [ $counter -eq 0 ]; then
    # 2. If not, try to start it
    service nginx start
    sleep 2
    # 3. After 2 seconds, check the Nginx state again
    counter=$(ps -C nginx --no-header | wc -l)
    # 4. If Nginx is still down, stop keepalived so the VIP fails over
    if [ $counter -eq 0 ]; then
        service keepalived stop
    fi
fi
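A manual failover drill to exercise this script (illustrative only: run it on the master, harbor1; it is guarded so it does nothing on a machine without an active nginx unit):

```shell
VIP=192.168.88.111
# Stop nginx and give check_nginx.sh (interval 2s) time to react
if command -v systemctl >/dev/null 2>&1 && systemctl is-active --quiet nginx; then
  systemctl stop nginx
  sleep 5
  # Either the script restarted nginx, or keepalived stopped and the VIP moved
  ip -4 addr show | grep -q "$VIP" && echo "VIP still here" || echo "VIP moved to the backup"
fi
```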

6.5 Start the Services

Start the nginx service on harbor1 and harbor2 first, then start keepalived.

1) Start nginx

[root@harbor1 ~]# systemctl enable --now nginx   #start nginx and enable it at boot
[root@harbor2 ~]# systemctl enable --now nginx
[root@harbor1 ~]# systemctl status nginx.service
[root@harbor2 ~]# systemctl status nginx.service

If nginx fails with [emerg] bind() to 0.0.0.0:XXXX failed (13: Permission denied) even when running as root, check whether SELinux is enforcing.

2) Start keepalived

[root@harbor1 ~]# systemctl enable --now keepalived
[root@harbor2 ~]# systemctl enable --now keepalived
[root@harbor1 ~]# systemctl status keepalived.service
[root@harbor2 ~]# systemctl status keepalived.service

6.6 Check the VIP

On harbor1, verify that the VIP is bound.

Note that ifconfig does not show the VIP; use ip addr or hostname -I instead.
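For example (the interface name ens33 comes from this setup):

```shell
# Secondary addresses added by keepalived show up under ip addr, not ifconfig
ip -4 addr show
# hostname -I lists every address bound to the host, VIP included
hostname -I
```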

7 Deploy the Harbor Instances

7.1 Deploy Harbor on the harbor1 Host

1) Download and extract the offline installer

$ mkdir /app   #installation directory  
$ wget https://github.com/goharbor/harbor/releases/download/v2.5.0/harbor-offline-installer-v2.5.0.tgz
$ tar zxvf harbor-offline-installer-v2.5.0.tgz -C /app/

2) Edit the configuration file (harbor.yml)

hostname: 192.168.88.138
http:
  port: 8021
# https access is disabled:
#https:
#  port: 443
#  certificate: /your/certificate/path
#  private_key: /your/private/key/path
## external proxy; once enabled, hostname is no longer used
external_url: http://192.168.88.111:8121
harbor_admin_password: Harbor12345
database:
  password: root123
  max_idle_conns: 100
  max_open_conns: 900
## shared storage, i.e. the mounted NFS directory
data_volume: /data/harbor_data
trivy:
  ignore_unfixed: false
  skip_update: false
  offline_scan: false
  insecure: false
jobservice:
  max_job_workers: 10
notification:
  webhook_job_max_retry: 10
chart:
  absolute_url: disabled
log:
  level: info
  local:
    rotate_count: 50
    rotate_size: 200M
    location: /var/log/harbor
_version: 2.5.0
## external database
external_database:
  harbor:
    host: 192.168.88.131 # database host address
    port: 5432 # database port
    db_name: registry # database name
    username: postgres # user that connects to this database
    password: 123456 # password for that user
    ssl_mode: disable
    max_idle_conns: 2
    max_open_conns: 0
  notary_signer:
    host: 192.168.88.131
    port: 5432
    db_name: notary_signer
    username: postgres
    password: 123456
    ssl_mode: disable
  notary_server:
    host: 192.168.88.131
    port: 5432
    db_name: notary_server
    username: postgres
    password: 123456
    ssl_mode: disable
# external Redis instance
external_redis:
  host: 192.168.88.131:6379 # Redis IP address and port; for Sentinel mode, use host_sentinel1:port_sentinel1,host_sentinel2:port_sentinel2
  password: 123456 # password for the external Redis service
  registry_db_index: 1
  jobservice_db_index: 2 # jobservice database index
  chartmuseum_db_index: 3 # chartmuseum database index
  trivy_db_index: 5 # trivy scanner database index
  idle_timeout_seconds: 30 # idle timeout
proxy:
  http_proxy:
  https_proxy:
  no_proxy:
  components:
    - core
    - jobservice
    - trivy
## metrics collection plugin:
metric:
  enabled: false
  port: 9090
  path: /metrics
upload_purging:
  enabled: true
  age: 168h
  interval: 24h
  dryrun: false

3) Inject the configuration into the components

Run ./prepare to render the harbor.yml settings into each component's configuration file:

[root@harbor1 harbor]# ./prepare --with-trivy --with-chartmuseum

4) Install Harbor

The installer imports the images automatically and starts the Harbor services once it completes.

[root@harbor1 harbor]# ./install.sh --with-trivy --with-chartmuseum


The closing message "Harbor has been installed and started successfully." indicates a successful installation.

5) Check service status

Check the service status with docker-compose:

[root@harbor1 harbor]# docker-compose ps

Deploy harbor2 the same way (setting hostname: 192.168.88.139 in its harbor.yml), then open the Harbor UI in a browser using each instance's host IP and port.

harbor1:

harbor2:

Harbor logs:

8 Service Verification

8.1 Access the VIP and Port in a Browser

http://192.168.88.111:8121


8.2 Log in to Harbor from the Command Line

$ docker login http://192.168.88.111:8121 -u admin -p Harbor12345

If the login fails with:
Error response from daemon: Get https://192.168.88.111:8121/v2/: http: server gave HTTP response to HTTPS client
add the registry as an insecure registry in the Docker daemon configuration:

cat /etc/docker/daemon.json
{
  "exec-opts": [
    "native.cgroupdriver=systemd"
  ],
  "insecure-registries": ["192.168.88.111:8121", "192.168.88.138:8021", "192.168.88.139:8021"],
  "registry-mirrors": [
    "http://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://registry.docker-cn.com"
  ],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
# restart docker
[root@habor-data ~]# systemctl daemon-reload
[root@habor-data ~]# systemctl restart docker

8.3 Push an Image to Harbor
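The image first has to be pulled and retagged for the registry; a minimal sketch with the names used in this walkthrough (the docker commands are guarded so the snippet is a no-op where docker is unavailable):

```shell
REGISTRY=192.168.88.111:8121
IMAGE=alpine:3.16
TARGET="$REGISTRY/library/$IMAGE"
if command -v docker >/dev/null 2>&1; then
  docker pull "$IMAGE" &&          # fetch the public image
  docker tag "$IMAGE" "$TARGET" && # retag it for the Harbor 'library' project
  docker push "$TARGET" ||         # push through the VIP
  echo "docker not reachable or push failed"
fi
```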

[root@habor-data ~]# docker push 192.168.88.111:8121/library/alpine:3.16
The push refers to repository [192.168.88.111:8121/library/alpine]
5bc340f6d4f5: Pushed
3.16: digest: sha256:e28792ec7904bff56f22df296d78cc1188caf347bd824570d0ecf235e4f6e4cd size: 528


You can then confirm in the Harbor UI that the image was pushed successfully.
