Previously, managing nginx on our team meant the ops engineers editing configuration files by hand and restarting nginx every time, which was tedious. I had long wanted a tool that could manage an nginx cluster conveniently, but after scouring the web and finding nothing usable, I designed and built one myself.
Preview of the result
Cluster group management UI
Here you can manage a group's nodes and its configuration file. After editing, every node can be restarted with one click, and if the configuration file contains an error it is reported instead of affecting the live service.
Cluster node management
Cluster node log viewing
Preview of the generated configuration file
vhost management
Design approach
Data model:
One NginxGroup owns multiple NginxNodes, which all share the same configuration file (a minimal sketch follows below).
Distributed architecture: manager node + agent nodes + web console
Each nginx machine runs an agent. On startup the agent registers itself with the manager automatically; through the web console you can assign an agent to a group and manage the group's configuration file.
When the configuration changes, the manager generates the configuration file, distributes it to the alive agents, and, once validation passes, tells the agents to restart nginx.
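A minimal sketch of this data model, assuming plain JPA entities (the field names are inferred from the code later in the post, not copied from the project):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Lob;

// shown together for brevity; each entity would live in its own file
@Entity
public class NginxGroup {
    @Id
    private String id;
    private String name;
    // the single nginx configuration shared by every node in the group
    @Lob
    private String conf;
    // getters and setters omitted
}

@Entity
public class NginxNode {
    @Id
    @GeneratedValue
    private Long id;
    private String name;
    // the agent's URL as discovered from Eureka (its instance home page URL)
    private String agent;
    // the group this node belongs to; empty until assigned in the web console
    private String groupId;
    // getters and setters omitted
}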
Key technical points
Distributed management
Distributed coordination is usually built on a registry such as ZooKeeper, but since this is a Java project, Eureka Server is all we need.
Add the Eureka dependencies to the manager:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
</dependency>
Then add @EnableEurekaServer to the application entry point.
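For example, on the manager's entry class (the class name here is illustrative):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer  // turns the manager into an embedded Eureka registry
public class ManagerApplication {
    public static void main(String[] args) {
        SpringApplication.run(ManagerApplication.class, args);
    }
}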
Add the registration configuration to the agent:
eureka:
  instance:
    prefer-ip-address: true
  client:
    service-url:
      defaultZone: http://admin:admin@ip:3002/eureka/
On the manager, the alive agents can be read from the embedded registry via EurekaServerContextHolder, and a scheduled task can automatically discover newly registered nodes:
public class NginxNodeDiscover {
    private static final String AGENT_NAME = "XNGINXAGENT";

    private PeerAwareInstanceRegistry getRegistry() {
        return getServerContext().getRegistry();
    }

    private EurekaServerContext getServerContext() {
        return EurekaServerContextHolder.getInstance().getServerContext();
    }

    @Autowired
    NginxNodeRepository nginxNodeRepository;

    // runs every minute and persists any newly registered agent as an NginxNode
    @Scheduled(fixedRate = 60000)
    public void discoverNginxNode() {
        List<String> nodes = getAliveAgents();
        nodes.stream().forEach(node -> {
            if (!nginxNodeRepository.findByAgent(node).isPresent()) {
                NginxNode nginxNode = new NginxNode();
                nginxNode.setAgent(node);
                nginxNode.setName(node);
                nginxNodeRepository.save(nginxNode);
            }
        });
    }

    // returns the home page URLs of all agent instances currently registered in Eureka
    public List<String> getAliveAgents() {
        List<String> instances = new ArrayList<>();
        List<Application> sortedApplications = getRegistry().getSortedApplications();
        Optional<Application> targetApp = sortedApplications.stream()
                .filter(a -> a.getName().equals(AGENT_NAME))
                .findFirst();
        if (targetApp.isPresent()) {
            Application app = targetApp.get();
            for (InstanceInfo info : app.getInstances()) {
                instances.add(info.getHomePageUrl());
            }
        }
        return instances;
    }
}
RPC calls
The manager needs to control the agents. The simplest approach is for each agent to expose REST services; the manager fetches the agent's address from Eureka and calls it directly, and Feign makes these calls convenient.
Define the interface:
public interface NginxAgentManager {

    @RequestLine("GET /nginx/start")
    RuntimeBuilder.RuntimeResult start();

    @RequestLine("GET /nginx/status")
    RuntimeBuilder.RuntimeResult status();

    @RequestLine("GET /nginx/reload")
    RuntimeBuilder.RuntimeResult reload();

    @RequestLine("GET /nginx/stop")
    RuntimeBuilder.RuntimeResult stop();

    @RequestLine("GET /nginx/testConfiguration")
    RuntimeBuilder.RuntimeResult testConfiguration();

    @RequestLine("GET /nginx/kill")
    RuntimeBuilder.RuntimeResult kill();

    @RequestLine("GET /nginx/restart")
    RuntimeBuilder.RuntimeResult restart();

    @RequestLine("GET /nginx/info")
    NginxInfo info();

    @RequestLine("GET /nginx/os")
    OperationalSystemInfo os();

    @RequestLine("GET /nginx/accesslogs/{lines}")
    List<String> getAccesslogs(@Param("lines") int lines);

    @RequestLine("GET /nginx/errorlogs/{lines}")
    List<String> getErrorLogs(@Param("lines") int lines);
}
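The start() flow shown further down also calls manager.update(conf) to push the new configuration before starting nginx, so the interface presumably declares something like the following as well (a sketch inferred from the agent's /update endpoint, not copied from the original):

// presumably also part of NginxAgentManager
@RequestLine("POST /nginx/update")
@Headers("Content-Type: application/json")
String update(NginxConf conf);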
The agent implements these endpoints:
@RestController
@RequestMapping("/nginx")
public class NginxResource {
    ...
    @PostMapping("/update")
    @Timed
    public String update(@RequestBody NginxConf conf) {
        // write the SSL certificate content for each common name before updating the main config
        if (conf.getSslDirectives() != null) {
            for (SslDirective sslDirective : conf.getSslDirectives()) {
                nginxControl.conf(sslDirective.getCommonName(), sslDirective.getContent());
            }
        }
        return updateConfig(conf.getConf());
    }

    @GetMapping("/accesslogs/{lines}")
    @Timed
    public List<String> getAccesslogs(@PathVariable Integer lines) {
        return nginxControl.getAccessLogs(lines);
    }
}
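Internally, nginxControl wraps the nginx binary. As a rough sketch of what such a wrapper might look like (the class and method names below are my assumptions, not the project's actual code), controlling nginx mostly comes down to shelling out:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.stream.Collectors;

public class NginxControl {

    // validate the configuration without touching the running instance (nginx -t)
    public boolean testConfiguration() throws Exception {
        return exec("nginx", "-t") == 0;
    }

    // ask the master process to reload its configuration (nginx -s reload)
    public boolean reload() throws Exception {
        return exec("nginx", "-s", "reload") == 0;
    }

    private int exec(String... command) throws Exception {
        Process process = new ProcessBuilder(command)
                .redirectErrorStream(true)   // merge stderr into stdout
                .start();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream()))) {
            // in the real agent this output would be wrapped into a RuntimeResult
            System.out.println(reader.lines().collect(Collectors.joining("\n")));
        }
        return process.waitFor();
    }
}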
Calling from the manager:
First build a Feign proxy instance, where nodeUrl is the URL of the agent node:
public NginxAgentManager getAgentManager(String nodeUrl) {
    return Feign.builder()
            .options(new Request.Options(1000, 3500))
            .retryer(new Retryer.Default(5000, 5000, 3))
            .requestInterceptor(new HeaderRequestInterceptor())
            .encoder(new GsonEncoder())
            .decoder(new GsonDecoder())
            .target(NginxAgentManager.class, nodeUrl);
}
After that, making calls is simple. For example, to start a group:
public void start(String groupId) {
    operateGroup(groupId, ((conf, node) -> {
        NginxAgentManager manager = getAgentManager(node.getAgent());
        String result = manager.update(conf);
        if (!result.equals("success")) {
            throw new XNginxException("node " + node.getAgent() + " update config file failed!");
        }
        RuntimeBuilder.RuntimeResult runtimeResult = manager.start();
        if (!runtimeResult.isSuccess()) {
            throw new XNginxException("node " + node.getAgent() + " start failed," + runtimeResult.getOutput());
        }
    }));
}
public void operateGroup(String groupId, BiConsumer<NginxConf, NginxNode> action) {
    List<String> aliveNodes = nodeDiscover.getAliveAgents();
    if (aliveNodes.isEmpty()) {
        throw new XNginxException("no alive agent!");
    }
    List<NginxNode> nginxNodes = nodeRepository.findAllByGroupId(groupId);
    if (nginxNodes.isEmpty()) {
        throw new XNginxException("the group has no nginx nodes!");
    }
    NginxConf conf = nginxConfigService.genConfig(groupId);
    for (NginxNode node : nginxNodes) {
        // skip nodes whose agent is not currently registered in Eureka
        if (!aliveNodes.contains(node.getAgent())) {
            continue;
        }
        action.accept(conf, node);
    }
}
Nginx configuration management
The heart of nginx configuration is its directives; the most important ones are the vhost and the location.
Let's define the vhost first:
public class VirtualHostDirective implements Directive {
    private Integer port = 80;
    private String aliases;
    private boolean enableSSL;
    private SslDirective sslCertificate;
    private SslDirective sslCertificateKey;
    private List<LocationDirective> locations;
    private String root;
    private String index;
    private String access_log;
}
The core of it is the LocationDirective. The design idea is that passAddress stores the location's target address, which can be either a URL or an upstream, distinguished by type; when an upstream is used, the load-balancing details are configured through proxy.
public class LocationDirective {
    public static final String PROXY = "PROXY";
    public static final String UWSGI = "UWSGI";
    public static final String FASTCGI = "FASTCGI";
    public static final String COMMON = "STATIC";
    private String path;
    private String type = COMMON;
    private ProxyDirective proxy;
    private List rewrites;
    private String advanced;
    private String passAddress;
}
Now look at ProxyDirective. The balance field distinguishes a plain URL from an upstream; when it is an upstream, servers holds the load-balanced backend servers.
public class ProxyDirective implements Directive {
    public static final String BALANCE_UPSTREAM = "upstream";
    public static final String BALANCE_URL = "url";
    private String name;
    private String strategy;
    /**
     * Balance type: upstream or url
     */
    private String balance = BALANCE_UPSTREAM;
    private List servers;
}
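To make the model concrete, a proxied location backed by an upstream could be assembled like this (the setters are assumed to exist for the fields shown above; this snippet is illustrative, not from the project):

// an upstream named api_backend with two backend servers
ProxyDirective api = new ProxyDirective();
api.setName("api_backend");
api.setStrategy("least_conn");
api.setBalance(ProxyDirective.BALANCE_UPSTREAM);
api.setServers(java.util.Arrays.asList("10.0.0.11:8080", "10.0.0.12:8080"));

// a location that proxies /api/ to that upstream
LocationDirective location = new LocationDirective();
location.setPath("/api/");
location.setType(LocationDirective.PROXY);
location.setPassAddress("http://api_backend");
location.setProxy(api);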
Importing existing configuration files
If you already have configuration files, they can be parsed and imported into the system. The parsing is routine text processing, so I won't go into detail here.
The core idea is to split the configuration file into blocks by matching braces, then extract the details with regular expressions and the like. For example, the code below carves out the server{...} blocks:
private List<String> blocks() {
    List<String> blocks = new ArrayList<>();
    List<String> lines = Arrays.asList(fileContent.split("\n"));
    AtomicInteger braceDepth = new AtomicInteger(0);
    AtomicInteger currentLine = new AtomicInteger(1);
    Integer indexStart = 0;
    Integer serverStartIndex = 0;
    for (String line : lines) {
        if (line.contains("{")) {
            braceDepth.getAndIncrement();
            if (line.contains("server")) {
                // remember where the server{...} block starts and at which nesting depth
                indexStart = currentLine.get() - 1;
                serverStartIndex = braceDepth.get() - 1;
            }
        } else if (line.contains("}")) {
            braceDepth.getAndDecrement();
            // back at the depth where the server block opened: the block is complete
            if (braceDepth.get() == serverStartIndex) {
                if (lines.get(indexStart).trim().startsWith("server")) {
                    blocks.add(StringUtils.join(lines.subList(indexStart, currentLine.get()), "\n"));
                }
            }
        }
        currentLine.getAndIncrement();
    }
    return blocks;
}
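Once a server block has been isolated, pulling out individual directives is plain regex work. For instance, a hypothetical helper (not from the project) that extracts server_name:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DirectiveExtractor {

    private static final Pattern SERVER_NAME = Pattern.compile("server_name\\s+([^;]+);");

    // returns the value of the server_name directive inside a server{...} block, or null
    public static String extractServerName(String serverBlock) {
        Matcher matcher = SERVER_NAME.matcher(serverBlock);
        return matcher.find() ? matcher.group(1).trim() : null;
    }

    public static void main(String[] args) {
        String block = "server {\n    listen 80;\n    server_name example.com www.example.com;\n}";
        System.out.println(extractServerName(block)); // example.com www.example.com
    }
}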
Generating configuration files
Configuration files are usually generated with a template engine, and this project is no exception: it uses the Velocity library.
public static StringWriter mergeFileTemplate(String pTemplatePath, Map<String, Object> pDto) {
    if (StringUtils.isEmpty(pTemplatePath)) {
        throw new NullPointerException("template path must not be empty");
    }
    StringWriter writer = new StringWriter();
    Template template;
    try {
        // ve is the shared VelocityEngine instance
        template = ve.getTemplate(pTemplatePath);
    } catch (Exception e) {
        throw new RuntimeException("failed to load template " + pTemplatePath, e);
    }
    VelocityContext context = VelocityHelper.convertDto2VelocityContext(pDto);
    try {
        template.merge(context, writer);
    } catch (Exception e) {
        throw new RuntimeException("failed to merge template " + pTemplatePath, e);
    }
    return writer;
}
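The ve used above is the shared VelocityEngine; the post does not show how it is configured, but a typical classpath-based setup would look roughly like this:

import org.apache.velocity.app.VelocityEngine;
import org.apache.velocity.runtime.RuntimeConstants;
import org.apache.velocity.runtime.resource.loader.ClasspathResourceLoader;

public class VelocityHelper {

    // shared engine used by mergeFileTemplate, loading *.vm templates from the classpath
    private static final VelocityEngine ve = new VelocityEngine();

    static {
        ve.setProperty(RuntimeConstants.RESOURCE_LOADER, "classpath");
        ve.setProperty("classpath.resource.loader.class", ClasspathResourceLoader.class.getName());
        ve.init();
    }

    // mergeFileTemplate and convertDto2VelocityContext from the surrounding snippets live here
}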
Define the template:
#if(${config.user})user ${config.user};#end
#if(${config.workerProcesses} == 0)
worker_processes auto;
#else
worker_processes ${config.workerProcesses};
#end
pid /opt/xnginx/settings/nginx.pid;
events {
    multi_accept off;
    worker_connections ${config.workerConnections};
}
...
Generate the configuration file:
public static StringWriter buildNginxConfString(ServerConfig serverConfig, List<VirtualHostDirective> hostDirectiveList, List<ProxyDirective> proxyDirectiveList) {
    Map<String, Object> map = new HashMap<>();
    map.put("config", serverConfig);
    map.put("upstreams", proxyDirectiveList);
    map.put("hosts", hostDirectiveList);
    return VelocityHelper.mergeFileTemplate(NGINX_CONF_VM, map);
}