How to Install Percona XtraDB Cluster for MySQL on Debian 8

On this page

  1. Configure Percona XtraDB Cluster on Debian 8
  2. Prerequisites
  3. Step 1: Install Percona XtraDB Cluster
  4. Step 2: Configure the First Node
  5. Step 3: Configure the Second Node
  6. Step 4: Configure the Third Node
  7. Test Replication

In this article I will show you how to configure a MySQL database cluster with three nodes in multi-master replication. Multi-master replication allows records to be written on every node, so if one node fails we can keep working on another node as if nothing had happened.

Percona's official documentation is available on the Percona website.

First of all, why do we choose three nodes and not just two? In any cluster the number of nodes should be odd, so that if a node gets disconnected we assume that the larger group of servers has the fresh data, and that data should be replicated to the node that went down to avoid data loss. This only concerns resolving conflicts in data replication: we will not lose data that was written only to the disconnected node.

This is done to avoid a situation called split brain, in which we cannot automatically decide which node holds the correct data. Think for example of a 2-node cluster where the two nodes get disconnected from each other and the same record is written on both: who wins when they come back online? We don't know, so a split brain occurs and we have to decide manually which record is correct.

The number of nodes needed to determine which part of the cluster holds the correct data is called the QUORUM; in our case the quorum is 2, so we need 2 servers that are always connected to each other. If all three nodes go down, we have a split-brain situation and we must manually decide which server should be started in bootstrap mode; bootstrapping is the procedure that decides which server will act as the primary and recover from the split brain.
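
To make this concrete, here is a minimal sketch (run on any node once MySQL is up) of how to check whether a node still belongs to the primary, quorum-holding component, and how to bring the cluster back up from a chosen node after a complete outage, using the same bootstrap command shown later in this guide:

    # Check whether this node is part of the primary (quorum-holding) component
    mysql -p -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';"   # expect: Primary

    # After a full outage, manually bootstrap the cluster again from the node you choose
    /etc/init.d/mysql bootstrap-pxc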

Configure Percona XtraDB Cluster on Debian 8

This tutorial describes how to install and configure three Percona XtraDB Cluster nodes on Debian 8 servers, using the packages from the Percona repositories.

  • Server 1
    • Hostname: mysql1.local.vm
    • IP address: 192.168.152.100

  • Server 2
    • Hostname: mysql2.local.vm
    • IP address: 192.168.152.110

  • Server 3
    • Hostname: mysql3.local.vm
    • IP address: 192.168.152.120

    On each host, modify the file /etc/hosts as follows to make sure that name resolution works correctly.

    127.0.0.1 localhost
    192.168.152.100 mysql1.local.vm mysql1
    192.168.152.110 mysql2.local.vm mysql2
    192.168.152.120 mysql3.local.vm mysql3

    # The following lines are desirable for IPv6 capable hosts
    ::1 localhost ip6-localhost ip6-loopback
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
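
    To quickly verify that these entries resolve as expected, you can run the following on each host (a simple optional check):

    getent hosts mysql1.local.vm mysql2.local.vm mysql3.local.vm
    ping -c 1 mysql2.local.vm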

    Prerequisites

    The procedure described in this tutorial requires the following minimal server setup:

    • Debian 8 installed on all three nodes; I suggest you follow this guide: https://linux教程/tutorial/debian-8-jessie-minimal-server/
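
    You can confirm the release on each node like this (an optional check; lsb_release is also used by the install commands below):

    lsb_release -d            # should report Debian GNU/Linux 8 (jessie)
    cat /etc/debian_version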

    Step 1: Install Percona XtraDB Cluster

    On all nodes, execute the following commands as root:

    wget https://repo.percona.com/apt/percona-release_0.1-4.$(lsb_release -sc)_all.deb
    dpkg -i percona-release_0.1-4.$(lsb_release -sc)_all.deb
    apt-get update
    apt-get -y install percona-xtradb-cluster-57

    Obviously, enter the MySQL password of your choice when prompted.

    Once the packages are installed, mysqld starts automatically. Stop mysqld on all three nodes with /etc/init.d/mysql stop.
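
    For example, on each of the three nodes (the pgrep line is just an optional sanity check):

    /etc/init.d/mysql stop
    pgrep -l mysqld    # should print nothing once the server has stopped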

    Step 2: Configure the First Node

    The individual nodes must be configured so that they are able to bootstrap the cluster. For more information, see the Percona documentation on bootstrapping the cluster.

    1. Make sure to add the following lines to the configuration file /etc/mysql/my.cnf on the first node (mysql1.local.vm), at the end of the [mysqld] section:

      [mysqld]

      ...

      # Path to Galera library
      wsrep_provider=/usr/lib/libgalera_smm.so

      # Cluster connection URL contains the IPs of node#1, node#2 and node#3
      wsrep_cluster_address=gcomm://192.168.152.100,192.168.152.110,192.168.152.120

      # In order for Galera to work correctly binlog format should be ROW
      binlog_format=ROW

      # MyISAM storage engine has only experimental support
      default_storage_engine=InnoDB

      # This InnoDB autoincrement locking mode is a requirement for Galera
      innodb_autoinc_lock_mode=2

      # Node #1 address
      wsrep_node_address=192.168.152.100

      # SST method
      wsrep_sst_method=xtrabackup-v2

      # Cluster name
      wsrep_cluster_name=my_ubuntu_cluster

      # Authentication for SST method
      wsrep_sst_auth="sstuser:PASSW0RD"

      Pay attention to the password you set up there, in my case "PASSW0RD".

    2. Start the first node with the following command:

      :~# /etc/init.d/mysql bootstrap-pxc
      

      This command will start the first node and bootstrap the cluster. You will see something like this if everything is ok:

      :~#
    3. After the first node has been started, connect to MySQL with the classic mysql -p command; the cluster status can then be checked by executing the query show status like 'wsrep%'; as in the example below:

      mysql> show status like 'wsrep%';
      +----------------------------+--------------------------------------+
      | Variable_name              | Value                                |
      +----------------------------+--------------------------------------+
      | wsrep_local_state_uuid     | 0251a27c-8a19-11e6-905b-f3f13b0ddc5b |
      ...
      | wsrep_local_state          | 4                                    |
      | wsrep_local_state_comment  | Synced                               |
      ...
      | wsrep_cluster_size         | 1                                    |
      | wsrep_cluster_status       | Primary                              |
      | wsrep_connected            | ON                                   |
      ...
      | wsrep_ready                | ON                                   |
      +----------------------------+--------------------------------------+
      59 rows in set (0.00 sec)
      

    This output shows that the cluster has been successfully bootstrapped.

    Now create the SST user referenced by wsrep_sst_auth in the configuration above and grant it the required privileges:

    mysql> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 'PASSW0RD';
    mysql> GRANT PROCESS, RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';
    mysql> FLUSH PRIVILEGES;
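
    To double-check that the account was created as intended, you can list its grants (an optional verification, not part of the original steps):

    mysql -p -e "SHOW GRANTS FOR 'sstuser'@'localhost';"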
    

    Step 3: Configure the Second Node

    1. Append the following lines to the configuration file /etc/mysql/my.cnf on the second node (mysql2.local.vm), so that it contains the following data:

      [mysqld]

      ...

      # Path to Galera library
      wsrep_provider=/usr/lib/libgalera_smm.so

      # Cluster connection URL contains the IPs of node#1, node#2 and node#3
      wsrep_cluster_address=gcomm://192.168.152.100,192.168.152.110,192.168.152.120

      # In order for Galera to work correctly binlog format should be ROW
      binlog_format=ROW

      # MyISAM storage engine has only experimental support
      default_storage_engine=InnoDB

      # This InnoDB autoincrement locking mode is a requirement for Galera
      innodb_autoinc_lock_mode=2

      # Node #2 address
      wsrep_node_address=192.168.152.110

      # SST method
      wsrep_sst_method=xtrabackup-v2

      # Cluster name
      wsrep_cluster_name=my_ubuntu_cluster

      # Authentication for SST method
      wsrep_sst_auth="sstuser:PASSW0RD"
    2. Start the second node with the following command (note that this time, as you can see, it is not started in bootstrap mode!):

      :~# /etc/init.d/mysql start
      
    3. After the server has been started, it should receive SST automatically. Cluster status can now be checked on both nodes. The following is an example of the status from the second node (mysql2.local.vm):

      mysql> show status like 'wsrep%';
      +----------------------------+--------------------------------------+
      | Variable_name              | Value                                |
      +----------------------------+--------------------------------------+
      | wsrep_local_state_uuid     | 0251a27c-8a19-11e6-905b-f3f13b0ddc5b |
      ...
      | wsrep_local_state          | 4                                    |
      | wsrep_local_state_comment  | Synced                               |
      ...
      | wsrep_cluster_size         | 2                                    |
      | wsrep_cluster_status       | Primary                              |
      | wsrep_connected            | ON                                   |
      ...
      | wsrep_ready                | ON                                   |
      +----------------------------+--------------------------------------+
      40 rows in set (0.01 sec)
      

      This output shows that the new node has been successfully added to the cluster. Note the variable wsrep_cluster_size, which has become 2 instead of the 1 we saw in the first query.
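
      As a shorthand, you can also query just the cluster size from the shell on any node (a small convenience, not part of the original steps):

      mysql -p -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"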

    Step 4: Configure the Third Node

    1. Append the following lines to the configuration file /etc/mysql/my.cnf on the third node (mysql3.local.vm), so it contains the following configuration:

      [mysqld]

      ...

      # Path to Galera library
      wsrep_provider=/usr/lib/libgalera_smm.so

      # Cluster connection URL contains the IPs of node#1, node#2 and node#3
      wsrep_cluster_address=gcomm://192.168.152.100,192.168.152.110,192.168.152.120

      # In order for Galera to work correctly binlog format should be ROW
      binlog_format=ROW

      # MyISAM storage engine has only experimental support
      default_storage_engine=InnoDB

      # This InnoDB autoincrement locking mode is a requirement for Galera
      innodb_autoinc_lock_mode=2

      # Node #3 address
      wsrep_node_address=192.168.152.120

      # SST method
      wsrep_sst_method=xtrabackup-v2

      # Cluster name
      wsrep_cluster_name=my_ubuntu_cluster

      # Authentication for SST method
      wsrep_sst_auth="sstuser:PASSW0RD"
    2. Start the third node with the following command:

      :~# /etc/init.d/mysql start
      
    3. After the server has been started, it should receive SST automatically. Cluster status can be checked on all nodes. The following is an example of status from the third node (mysql3.local.vm):

      mysql> show status like 'wsrep%';
      +----------------------------+--------------------------------------+
      | Variable_name              | Value                                |
      +----------------------------+--------------------------------------+
      | wsrep_local_state_uuid     | 0251a27c-8a19-11e6-905b-f3f13b0ddc5b |
      ...
      | wsrep_local_state          | 4                                    |
      | wsrep_local_state_comment  | Synced                               |
      ...
      | wsrep_cluster_size         | 3                                    |
      | wsrep_cluster_status       | Primary                              |
      | wsrep_connected            | ON                                   |
      ...
      | wsrep_ready                | ON                                   |
      +----------------------------+--------------------------------------+
      40 rows in set (0.01 sec)
      

      This output confirms that the third node has joined the cluster. Again, take a look at wsrep_cluster_size, which has now become 3 instead of 2.

    If you run into problems, take a look at /var/log/syslog to check whether everything is fine:

    Oct 4 12:16:13 mysql3 mysql[2767]: Starting MySQL (Percona XtraDB Cluster) database server: mysqld . . .State transfer in progress, setting sleep higher: mysqld . ..
    Oct 4 12:16:13 mysql3 systemd[1]: Started LSB: Start and stop the mysql (Percona XtraDB Cluster) daemon.
    Oct 4 12:17:01 mysql3 CRON[3731]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)

    In this example everything went well, and you can see that the State Transfer is in progress, which means data is being transferred to the joining node.
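
    If you want to follow the joining process while it runs, you can watch the log and poll the node state (wsrep_local_state_comment should read Synced once the transfer has finished):

    tail -f /var/log/syslog | grep -i -e mysql -e wsrep
    mysql -p -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';"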

    Test Replication

    To test replication, let's create a new database on the second node, create a table for that database on the third node, and then add some records to the table on the first node.

    1. Create a new database on the second node:

      mysql@mysql2> CREATE DATABASE percona;
      Query OK, 1 row affected (0.01 sec)
      
    2. Create a table on the third node:

      mysql@mysql3> USE percona;
      Database changed
      
      mysql@mysql3> CREATE TABLE example (node_id INT PRIMARY KEY, node_name VARCHAR(30));
      Query OK, 0 rows affected (0.05 sec)
      
    3. Insert records on the first node:

      mysql@mysql1> INSERT INTO percona.example VALUES (1, 'percona1');
      Query OK, 1 row affected (0.02 sec)
      
    4. Retrieve all the rows from that table on the second node:

      mysql@mysql2> SELECT * FROM percona.example;
      +---------+-----------+
      | node_id | node_name |
      +---------+-----------+
      |       1 | percona1  |
      +---------+-----------+
      1 row in set (0.00 sec)
      

    To ensure that your application can always reach the cluster, you can put a load balancer in front of the three nodes.
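
    One possible sketch, assuming HAProxy as the load balancer (the article does not prescribe a specific one): install it with apt-get -y install haproxy, add a TCP-mode section like the following to /etc/haproxy/haproxy.cfg, and restart the haproxy service:

      # Hypothetical excerpt for /etc/haproxy/haproxy.cfg: round-robin TCP balancing
      # across the three PXC nodes defined earlier in this tutorial
      listen mysql-cluster
        bind 0.0.0.0:3306
        mode tcp
        balance roundrobin
        server mysql1 192.168.152.100:3306 check
        server mysql2 192.168.152.110:3306 check
        server mysql3 192.168.152.120:3306 check

    If HAProxy runs on one of the database nodes itself, bind it to a port other than 3306 so it does not clash with mysqld.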
