Version | Date | Notes |
---|---|---|
1.0 | 2020.5.23 | First published |
1.1 | 2020.8.16 | Added the bird's-eye-view section |
1.2 | 2020.8.21 | Added per-section summaries, tightened the wording, added figures |
1.3 | 2020.9.12 | Added the consistency discussion |
1.4 | 2021.6.23 | Title changed from … |
When we send a data-update request to ZooKeeper, what does the processing flow look like? And what consensus algorithm does ZooKeeper use to guarantee consistency? With these questions in mind, let's get into today's main content.
Before digging into the source code, a quick primer on the chain-of-responsibility pattern, because it is closely tied to the rest of this article. In short, the pattern links a number of handler objects into a chain; a request travels along the chain, in order, until the handler responsible for it is found.

What is the benefit? It decouples the sender of a request from its handlers: a handler is free to pass a request along until the right handler is reached. If a handler receives a request that is not its business, it simply forwards it; no extra handling logic is needed.
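To make the pattern concrete, here is a minimal, self-contained chain-of-responsibility sketch in Java. It is illustrative only (hypothetical Handler classes, not ZooKeeper code): each handler does its own piece of work and then hands the request to the next handler, much like the RequestProcessor chains we are about to look at.

```java
// Minimal chain-of-responsibility illustration (hypothetical classes, not ZooKeeper code).
interface Handler {
    void handle(String request);
}

class LoggingHandler implements Handler {
    private final Handler next;

    LoggingHandler(Handler next) {
        this.next = next;
    }

    @Override
    public void handle(String request) {
        System.out.println("log: " + request); // this handler's own responsibility
        if (next != null) {
            next.handle(request);              // pass the request down the chain
        }
    }
}

class TerminalHandler implements Handler {
    @Override
    public void handle(String request) {
        System.out.println("done: " + request); // end of the chain, nothing to forward
    }
}

class ChainDemo {
    public static void main(String[] args) {
        Handler chain = new LoggingHandler(new TerminalHandler());
        chain.handle("create /foo");
    }
}
```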
Let's start from the ZooKeeperServer class and look at its subclasses. The (common) ZooKeeper server roles we care about are LeaderZooKeeperServer, FollowerZooKeeperServer and ObserverZooKeeperServer.

The code entry point is LeaderZooKeeperServer.setupRequestProcessors. For readability, I will first present the logical organization as a view; readers who prefer the raw source can read the detailed walkthrough in 3.2.
|-- LeaderRequestProcessor
  \-- processRequest // checks whether the session is still valid
|-- PrepRequestProcessor
  \-- processRequest // validates arguments and creates a transaction when one is needed
|-- ProposalRequestProcessor
  \-- processRequest // initiates the proposal
  \-- // transactional requests
    \-- SyncRequestProcessor
      \-- processRequest // writes the request to the transaction log, triggering a snapshot when necessary
    \-- AckRequestProcessor
      \-- processRequest // confirms the transaction log write is done; feeds an ACK back to the Proposal's vote collector
    \-- CommitProcessor
      \-- processRequest // waits for the cluster to vote on the Proposal until it can be committed
    \-- ToBeAppliedRequestProcessor
      \-- processRequest // stores committable Proposals already handled by CommitProcessor; they are removed only after FinalRequestProcessor finishes
    \-- FinalRequestProcessor
      \-- processRequest // replies to the request and changes the state of the in-memory database
  \-- // non-transactional requests
    \-- CommitProcessor
      \-- processRequest // skip: non-transactional requests just pass through
    \-- ToBeAppliedRequestProcessor
      \-- processRequest // skip: works together with CommitProcessor
    \-- FinalRequestProcessor
      \-- processRequest // replies to the request and changes the state of the in-memory database

// Follower: handles client requests
|-- FollowerRequestProcessor
  \-- processRequest // transactional requests go through CommitProcessor and are forwarded to the leader; otherwise straight on to FinalProcessor
|-- CommitProcessor
  \-- processRequest // waits for the cluster to vote on the Proposal until it can be committed
|-- FinalProcessor
  \-- processRequest // replies to the request and changes the state of the in-memory database

// Follower: dedicated to proposals initiated by the leader
|-- SyncRequestProcessor
|  \-- processRequest // writes the request to the transaction log, triggering a snapshot when necessary
|-- SendAckRequestProcessor
  \-- processRequest // ACKs the proposal initiator, indicating this server has finished logging

// Observer: handles client requests
|-- ObserverRequestProcessor
  \-- processRequest // almost identical to FollowerRequestProcessor: transactional requests go through CommitProcessor and are forwarded to the leader; otherwise straight on to FinalProcessor
|-- CommitProcessor
  \-- processRequest // waits for the cluster to vote on the Proposal until it can be committed
|-- FinalProcessor
  \-- processRequest // replies to the request and changes the state of the in-memory database
The source-code analysis below is based on version 3.5.7.
@Override
protected void setupRequestProcessors() {
RequestProcessor finalProcessor = new FinalRequestProcessor(this);
RequestProcessor toBeAppliedProcessor = new Leader.ToBeAppliedRequestProcessor(finalProcessor, getLeader());
commitProcessor = new CommitProcessor(toBeAppliedProcessor,
Long.toString(getServerId()), false,
getZooKeeperServerListener());
commitProcessor.start();
ProposalRequestProcessor proposalProcessor = new ProposalRequestProcessor(this,
commitProcessor);
proposalProcessor.initialize();
prepRequestProcessor = new PrepRequestProcessor(this, proposalProcessor);
prepRequestProcessor.start();
firstProcessor = new LeaderRequestProcessor(this, prepRequestProcessor);
setupContainerManager();
}
@Override
public void processRequest(Request request)
throws RequestProcessorException {
// Check if this is a local session and we are trying to create
// an ephemeral node, in which case we upgrade the session
Request upgradeRequest = null;
try {
upgradeRequest = lzks.checkUpgradeSession(request);
} catch (KeeperException ke) {
if (request.getHdr() != null) {
LOG.debug("Updating header");
request.getHdr().setType(OpCode.error);
request.setTxn(new ErrorTxn(ke.code().intValue()));
}
request.setException(ke);
LOG.info("Error creating upgrade request " + ke.getMessage());
} catch (IOException ie) {
LOG.error("Unexpected error in upgrade", ie);
}
if (upgradeRequest != null) {
nextProcessor.processRequest(upgradeRequest);
}
nextProcessor.processRequest(request);
}
The logic here is straightforward: check whether the request is trying to create an ephemeral node on a local session, and if so upgrade the session; if the upgrade fails, the exception is recorded on the request.
PrepRequestProcessor has more than 1,000 lines of code, so we will only pick out the more representative parts. Before that, let's read its javadoc:
This request processor is generally at the start of a RequestProcessor change. It sets up any transactions associated with requests that change the
state of the system. It counts on ZooKeeperServer to update
outstandingRequests, so that it can take into account transactions that are
in the queue to be applied when generating a transaction.
In short, it generally sits at the head of the request processing chain, and it sets up the transactions for requests that change the state of the system. For create-type requests the logic is roughly:
case OpCode.create2:
CreateRequest create2Request = new CreateRequest();
pRequest2Txn(request.type, zks.getNextZxid(), request, create2Request, true);
break;
Jumping to pRequest2Txn:
protected void pRequest2Txn(int type, long zxid, Request request,
Record record, boolean deserialize)
throws KeeperException, IOException, RequestProcessorException
{
request.setHdr(new TxnHeader(request.sessionId, request.cxid, zxid,
Time.currentWallTime(), type));
switch (type) {
case OpCode.create:
case OpCode.create2:
case OpCode.createTTL:
case OpCode.createContainer: {
pRequest2TxnCreate(type, request, record, deserialize);
break;
}
//.... remaining code omitted
Then on to pRequest2TxnCreate:
private void pRequest2TxnCreate(int type, Request request, Record record, boolean deserialize) throws IOException, KeeperException {
if (deserialize) {
ByteBufferInputStream.byteBuffer2Record(request.request, record);
}
int flags;
String path;
List<ACL> acl;
byte[] data;
long ttl;
if (type == OpCode.createTTL) {
CreateTTLRequest createTtlRequest = (CreateTTLRequest)record;
flags = createTtlRequest.getFlags();
path = createTtlRequest.getPath();
acl = createTtlRequest.getAcl();
data = createTtlRequest.getData();
ttl = createTtlRequest.getTtl();
} else {
CreateRequest createRequest = (CreateRequest)record;
flags = createRequest.getFlags();
path = createRequest.getPath();
acl = createRequest.getAcl();
data = createRequest.getData();
ttl = -1;
}
CreateMode createMode = CreateMode.fromFlag(flags);
validateCreateRequest(path, createMode, request, ttl);
String parentPath = validatePathForCreate(path, request.sessionId);
List<ACL> listACL = fixupACL(path, request.authInfo, acl);
ChangeRecord parentRecord = getRecordForPath(parentPath);
checkACL(zks, parentRecord.acl, ZooDefs.Perms.CREATE, request.authInfo);
int parentCVersion = parentRecord.stat.getCversion();
if (createMode.isSequential()) {
path = path + String.format(Locale.ENGLISH, "%010d", parentCVersion);
}
validatePath(path, request.sessionId);
try {
if (getRecordForPath(path) != null) {
throw new KeeperException.NodeExistsException(path);
}
} catch (KeeperException.NoNodeException e) {
// ignore this one
}
boolean ephemeralParent = EphemeralType.get(parentRecord.stat.getEphemeralOwner()) == EphemeralType.NORMAL;
if (ephemeralParent) {
throw new KeeperException.NoChildrenForEphemeralsException(path);
}
int newCversion = parentRecord.stat.getCversion()+1;
if (type == OpCode.createContainer) {
request.setTxn(new CreateContainerTxn(path, data, listACL, newCversion));
} else if (type == OpCode.createTTL) {
request.setTxn(new CreateTTLTxn(path, data, listACL, newCversion, ttl));
} else {
request.setTxn(new CreateTxn(path, data, listACL, createMode.isEphemeral(),
newCversion));
}
StatPersisted s = new StatPersisted();
if (createMode.isEphemeral()) {
s.setEphemeralOwner(request.sessionId);
}
parentRecord = parentRecord.duplicate(request.getHdr().getZxid());
parentRecord.childCount++;
parentRecord.stat.setCversion(newCversion);
addChangeRecord(parentRecord);
addChangeRecord(new ChangeRecord(request.getHdr().getZxid(), path, s, 0, listACL));
}
The logic can be summarized roughly as: deserialize the request, validate the path and the ACL, build the create transaction, and add the resulting change records (the updated parent and the new node) to the outstandingChanges queue.

Next, multi (composite) transactional requests:
case OpCode.multi:
MultiTransactionRecord multiRequest = new MultiTransactionRecord();
try {
ByteBufferInputStream.byteBuffer2Record(request.request, multiRequest);
} catch(IOException e) {
request.setHdr(new TxnHeader(request.sessionId, request.cxid, zks.getNextZxid(),
Time.currentWallTime(), OpCode.multi));
throw e;
}
List<Txn> txns = new ArrayList<Txn>();
//Each op in a multi-op must have the same zxid!
long zxid = zks.getNextZxid();
KeeperException ke = null;
//Store off current pending change records in case we need to rollback
Map<String, ChangeRecord> pendingChanges = getPendingChanges(multiRequest);
for(Op op: multiRequest) {
Record subrequest = op.toRequestRecord();
int type;
Record txn;
/* If we've already failed one of the ops, don't bother
* trying the rest as we know it's going to fail and it
* would be confusing in the logfiles.
*/
if (ke != null) {
type = OpCode.error;
txn = new ErrorTxn(Code.RUNTIMEINCONSISTENCY.intValue());
}
/* Prep the request and convert to a Txn */
else {
try {
pRequest2Txn(op.getType(), zxid, request, subrequest, false);
type = request.getHdr().getType();
txn = request.getTxn();
} catch (KeeperException e) {
ke = e;
type = OpCode.error;
txn = new ErrorTxn(e.code().intValue());
if (e.code().intValue() > Code.APIERROR.intValue()) {
LOG.info("Got user-level KeeperException when processing {} aborting" +
" remaining multi ops. Error Path:{} Error:{}",
request.toString(), e.getPath(), e.getMessage());
}
request.setException(e);
/* Rollback change records from failed multi-op */
rollbackPendingChanges(zxid, pendingChanges);
}
}
//FIXME: I don't want to have to serialize it here and then
// immediately deserialize in next processor. But I'm
// not sure how else to get the txn stored into our list.
ByteArrayOutputStream baos = new ByteArrayOutputStream();
BinaryOutputArchive boa = BinaryOutputArchive.getArchive(baos);
txn.serialize(boa, "request") ;
ByteBuffer bb = ByteBuffer.wrap(baos.toByteArray());
txns.add(new Txn(type, bb.array()));
}
request.setHdr(new TxnHeader(request.sessionId, request.cxid, zxid,
Time.currentWallTime(), request.type));
request.setTxn(new MultiTxn(txns));
break;
The code looks ugly, but the logic is simple enough: for each op in the multi request, if an earlier op has already failed, record an error transaction; otherwise run it through pRequest2Txn, rolling back the pending change records on failure. Every sub-transaction is then serialized and collected into a single MultiTxn, all sharing the same zxid.
//All the rest don't need to create a Txn - just verify session
case OpCode.sync:
zks.sessionTracker.checkSession(request.sessionId,
request.getOwner());
break;
Non-transactional requests only need a session check before being handed to the next processor.
For transactional requests, ProposalRequestProcessor initiates a Proposal and passes the request on to the CommitProcessor; it also hands the transactional request to the SyncRequestProcessor.
public void processRequest(Request request) throws RequestProcessorException {
// LOG.warn("Ack>>> cxid = " + request.cxid + " type = " +
// request.type + " id = " + request.sessionId);
// request.addRQRec(">prop");
/* In the following IF-THEN-ELSE block, we process syncs on the leader.
* If the sync is coming from a follower, then the follower
* handler adds it to syncHandler. Otherwise, if it is a client of
* the leader that issued the sync command, then syncHandler won't
* contain the handler. In this case, we add it to syncHandler, and
* call processRequest on the next processor.
*/
if (request instanceof LearnerSyncRequest){
zks.getLeader().processSync((LearnerSyncRequest)request);
} else {
nextProcessor.processRequest(request);
if (request.getHdr() != null) {
// We need to sync and get consensus on any transactions
try {
zks.getLeader().propose(request);
} catch (XidRolloverException e) {
throw new RequestProcessorException(e.getMessage(), e);
}
syncProcessor.processRequest(request);
}
}
}
Next, look at propose:
/**
* create a proposal and send it out to all the members
*
* @param request
* @return the proposal that is queued to send to all the members
*/
public Proposal propose(Request request) throws XidRolloverException {
/**
* Address the rollover issue. All lower 32bits set indicate a new leader
* election. Force a re-election instead. See ZOOKEEPER-1277
*/
if ((request.zxid & 0xffffffffL) == 0xffffffffL) {
String msg =
"zxid lower 32 bits have rolled over, forcing re-election, and therefore new epoch start";
shutdown(msg);
throw new XidRolloverException(msg);
}
byte[] data = SerializeUtils.serializeRequest(request);
proposalStats.setLastBufferSize(data.length);
QuorumPacket pp = new QuorumPacket(Leader.PROPOSAL, request.zxid, data, null);
Proposal p = new Proposal();
p.packet = pp;
p.request = request;
synchronized(this) {
p.addQuorumVerifier(self.getQuorumVerifier());
if (request.getHdr().getType() == OpCode.reconfig){
self.setLastSeenQuorumVerifier(request.qv, true);
}
if (self.getQuorumVerifier().getVersion()<self.getLastSeenQuorumVerifier().getVersion()) {
p.addQuorumVerifier(self.getLastSeenQuorumVerifier());
}
if (LOG.isDebugEnabled()) {
LOG.debug("Proposing:: " + request);
}
lastProposed = p.packet.getZxid();
outstandingProposals.put(lastProposed, p);
sendPacket(pp);
}
return p;
}
What gets sent here is a QuorumPacket, which implements the Record interface, with its type set to PROPOSAL. Its comment reads:
/**
* This message type is sent by a leader to propose a mutation.
*/
public final static int PROPOSAL = 2;
Clearly, this is a mutation that only the Leader may initiate. The logic of propose, briefly: guard against zxid rollover (forcing a re-election when the lower 32 bits are exhausted), serialize the request into a PROPOSAL QuorumPacket, record the Proposal in the outstandingProposals map keyed by zxid, and broadcast the packet to the followers.

CommitProcessor is, as the name suggests, the transaction committer. It only cares about transactional requests, holding each one until the cluster's votes on the Proposal allow it to be committed. With the CommitProcessor, every server can process transactions in a well-defined order.

The code of this processor is fairly plain, so I won't spend space analyzing it; knowing the above is enough to understand how the whole request chain fits together.
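For readers who want a feel for it anyway, here is a greatly simplified sketch of the idea (hypothetical Mini* types, not the real CommitProcessor): incoming requests are queued, reads pass straight through, and writes wait until the matching quorum-committed request arrives.

```java
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical minimal types, for illustration only.
interface MiniProcessor {
    void process(MiniRequest r);
}

class MiniRequest {
    final long zxid;
    final boolean transactional;

    MiniRequest(long zxid, boolean transactional) {
        this.zxid = zxid;
        this.transactional = transactional;
    }
}

class MiniCommitProcessor extends Thread {
    private final LinkedBlockingQueue<MiniRequest> queued = new LinkedBlockingQueue<>();
    private final LinkedBlockingQueue<MiniRequest> committed = new LinkedBlockingQueue<>();
    private final MiniProcessor next;

    MiniCommitProcessor(MiniProcessor next) {
        this.next = next;
    }

    void processRequest(MiniRequest r) {
        queued.add(r);    // called by the previous processor in the chain
    }

    void commit(MiniRequest r) {
        committed.add(r); // called once a quorum has acknowledged the proposal
    }

    @Override
    public void run() {
        try {
            while (true) {
                MiniRequest r = queued.take();
                if (!r.transactional) {
                    next.process(r);                // reads never wait for a commit
                } else {
                    next.process(committed.take()); // writes block here until committed
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();     // exit quietly when interrupted
        }
    }
}
```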
SyncRequestProcessor's logic is quite simple: append the request to the transaction log and, when needed, trigger a snapshot.
public void processRequest(Request request) {
// request.addRQRec(">sync");
queuedRequests.add(request);
}
// The core method of the thread: it drains the queuedRequests queue
@Override
public void run() {
try {
int logCount = 0;
// we do this in an attempt to ensure that not all of the servers
// in the ensemble take a snapshot at the same time
int randRoll = r.nextInt(snapCount/2);
while (true) {
Request si = null;
if (toFlush.isEmpty()) {
si = queuedRequests.take();
} else {
si = queuedRequests.poll();
if (si == null) {
flush(toFlush);
continue;
}
}
if (si == requestOfDeath) {
break;
}
if (si != null) {
// track the number of records written to the log
if (zks.getZKDatabase().append(si)) {
logCount++;
if (logCount > (snapCount / 2 + randRoll)) {
randRoll = r.nextInt(snapCount/2);
// roll the log
zks.getZKDatabase().rollLog();
// take a snapshot
if (snapInProcess != null && snapInProcess.isAlive()) {
LOG.warn("Too busy to snap, skipping");
} else {
snapInProcess = new ZooKeeperThread("Snapshot Thread") {
public void run() {
try {
zks.takeSnapshot();
} catch(Exception e) {
LOG.warn("Unexpected exception", e);
}
}
};
snapInProcess.start();
}
logCount = 0;
}
} else if (toFlush.isEmpty()) {
// optimization for read heavy workloads
// iff this is a read, and there are no pending
// flushes (writes), then just pass this to the next
// processor
if (nextProcessor != null) {
nextProcessor.processRequest(si);
if (nextProcessor instanceof Flushable) {
((Flushable)nextProcessor).flush();
}
}
continue;
}
toFlush.add(si);
if (toFlush.size() > 1000) {
flush(toFlush);
}
}
}
} catch (Throwable t) {
handleException(this.getName(), t);
} finally{
running = false;
}
LOG.info("SyncRequestProcessor exited!");
}
The core of this processor is the toBeApplied queue, which stores committable Proposals that have already been handled by the CommitProcessor; they are removed only after the FinalRequestProcessor has finished with them.
/*
* (non-Javadoc)
*
* @see org.apache.zookeeper.server.RequestProcessor#processRequest(org.apache.zookeeper.server.Request)
*/
public void processRequest(Request request) throws RequestProcessorException {
next.processRequest(request);
// The only requests that should be on toBeApplied are write
// requests, for which we will have a hdr. We can't simply use
// request.zxid here because that is set on read requests to equal
// the zxid of the last write op.
if (request.getHdr() != null) {
long zxid = request.getHdr().getZxid();
Iterator<Proposal> iter = leader.toBeApplied.iterator();
if (iter.hasNext()) {
Proposal p = iter.next();
if (p.request != null && p.request.zxid == zxid) {
iter.remove();
return;
}
}
LOG.error("Committed request not found on toBeApplied: "
+ request);
}
}
FinalRequestProcessor is the last processor in the chain, so for space reasons here is just a short description: it replies to the request and is responsible for making transactional requests take effect, that is, changing the state of the in-memory database.
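As a rough illustration of that idea (hypothetical types, with a plain map standing in for the DataTree, not the real FinalRequestProcessor), the last step boils down to mutating the in-memory store for writes and building a reply for the client:

```java
import java.util.HashMap;
import java.util.Map;

// Heavily simplified sketch: apply committed changes to an in-memory store and reply.
class MiniFinalProcessor {
    private final Map<String, byte[]> memoryDb = new HashMap<>(); // stands in for the DataTree

    String processRequest(String op, String path, byte[] data) {
        switch (op) {
            case "create":
            case "setData":
                memoryDb.put(path, data);          // transactional request: change the state
                return "OK " + path;               // then answer the client
            case "getData":
                byte[] value = memoryDb.get(path); // read request: no state change
                return value == null ? "NONODE" : new String(value);
            default:
                return "UNIMPLEMENTED " + op;
        }
    }
}
```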
Now for the Follower. First, look at how it assembles its processors:
@Override
protected void setupRequestProcessors() {
RequestProcessor finalProcessor = new FinalRequestProcessor(this);
commitProcessor = new CommitProcessor(finalProcessor,
Long.toString(getServerId()), true, getZooKeeperServerListener());
commitProcessor.start();
firstProcessor = new FollowerRequestProcessor(this, commitProcessor);
((FollowerRequestProcessor) firstProcessor).start();
syncProcessor = new SyncRequestProcessor(this,
new SendAckRequestProcessor((Learner)getFollower()));
syncProcessor.start();
}
As the code shows, two request chains are assembled here:

- FollowerRequestProcessor -> CommitProcessor -> FinalRequestProcessor
- SyncRequestProcessor -> SendAckRequestProcessor

So when a request arrives, which chain handles it? Tracing it roughly:

- the first chain is driven by ZooKeeperServer and handles client requests;
- the second chain is entered through logRequest and is driven by Learner, handling requests (proposals) issued by the leader.

At this point someone will ask: this is clearly a Follower (the Observer later behaves the same way), so why does the request come in through Learner? The class signature explains it:
/**
* This class is the superclass of two of the three main actors in a ZK
* ensemble: Followers and Observers. Both Followers and Observers share
* a good deal of code which is moved into Peer to avoid duplication.
*/
public class Learner {
To avoid duplicated code, the shared logic was pulled up into this common superclass.
FollowerRequestProcessor is the Follower's normal processor: it checks whether the request is transactional; if it is, it forwards the request to the Leader, otherwise it handles it locally. FollowerRequestProcessor.run:
@Override
public void run() {
try {
while (!finished) {
Request request = queuedRequests.take();
if (LOG.isTraceEnabled()) {
ZooTrace.logRequest(LOG, ZooTrace.CLIENT_REQUEST_TRACE_MASK,
'F', request, "");
}
if (request == Request.requestOfDeath) {
break;
}
// We want to queue the request to be processed before we submit
// the request to the leader so that we are ready to receive
// the response
nextProcessor.processRequest(request);
// We now ship the request to the leader. As with all
// other quorum operations, sync also follows this code
// path, but different from others, we need to keep track
// of the sync operations this follower has pending, so we
// add it to pendingSyncs.
switch (request.type) {
case OpCode.sync:
zks.pendingSyncs.add(request);
zks.getFollower().request(request);
break;
case OpCode.create:
case OpCode.create2:
case OpCode.createTTL:
case OpCode.createContainer:
case OpCode.delete:
case OpCode.deleteContainer:
case OpCode.setData:
case OpCode.reconfig:
case OpCode.setACL:
case OpCode.multi:
case OpCode.check:
zks.getFollower().request(request);
break;
case OpCode.createSession:
case OpCode.closeSession:
// Don't forward local sessions to the leader.
if (!request.isLocalSession()) {
zks.getFollower().request(request);
}
break;
}
}
} catch (Exception e) {
handleException(this.getName(), e);
}
LOG.info("FollowerRequestProcessor exited loop!");
}
Handing the request to the CommitProcessor first can look puzzling: transactional requests are submitted to the leader anyway, so why would the follower need this processor? As the in-code comment explains, the request is queued locally before it is shipped to the leader so that the follower is ready to receive the response; the CommitProcessor, as described earlier, then holds the request until the cluster's votes allow the Proposal to be committed.

Next, SendAckRequestProcessor.processRequest:
public void processRequest(Request si) {
if(si.type != OpCode.sync){
QuorumPacket qp = new QuorumPacket(Leader.ACK, si.getHdr().getZxid(), null,
null);
try {
learner.writePacket(qp, false);
} catch (IOException e) {
LOG.warn("Closing connection to leader, exception during packet send", e);
try {
if (!learner.sock.isClosed()) {
learner.sock.close();
}
} catch (IOException e1) {
// Nothing to do, we are shutting things down, so an exception here is irrelevant
LOG.debug("Ignoring error closing the connection", e1);
}
}
}
}
The logic is very simple: send an ACK back to the proposal initiator, indicating that this server has finished writing the transaction log.
/**
* Set up the request processors for an Observer:
* firstProcesor->commitProcessor->finalProcessor
*/
@Override
protected void setupRequestProcessors() {
// We might consider changing the processor behaviour of
// Observers to, for example, remove the disk sync requirements.
// Currently, they behave almost exactly the same as followers.
RequestProcessor finalProcessor = new FinalRequestProcessor(this);
commitProcessor = new CommitProcessor(finalProcessor,
Long.toString(getServerId()), true,
getZooKeeperServerListener());
commitProcessor.start();
firstProcessor = new ObserverRequestProcessor(this, commitProcessor);
((ObserverRequestProcessor) firstProcessor).start();
/*
* Observer should write to disk, so that the it won't request
* too old txn from the leader which may lead to getting an entire
* snapshot.
*
* However, this may degrade performance as it has to write to disk
* and do periodic snapshot which may double the memory requirements
*/
if (syncRequestProcessorEnabled) {
syncProcessor = new SyncRequestProcessor(this, null);
syncProcessor.start();
}
}
The logic is clear (probably because this code was added after 3.3.0). The normal request chain is ObserverRequestProcessor -> CommitProcessor -> FinalRequestProcessor.

If syncRequestProcessorEnabled is on (it is by default), the Observer also writes the transaction log and takes snapshots. This costs some performance and raises the memory requirements.
Then look at ObserverRequestProcessor: it is practically a carbon copy of FollowerRequestProcessor, and any engineer with standards would try to factor the duplication out (a hypothetical sketch of such a refactor follows the listing below).
@Override
public void run() {
try {
while (!finished) {
Request request = queuedRequests.take();
if (LOG.isTraceEnabled()) {
ZooTrace.logRequest(LOG, ZooTrace.CLIENT_REQUEST_TRACE_MASK,
'F', request, "");
}
if (request == Request.requestOfDeath) {
break;
}
// We want to queue the request to be processed before we submit
// the request to the leader so that we are ready to receive
// the response
nextProcessor.processRequest(request);
// We now ship the request to the leader. As with all
// other quorum operations, sync also follows this code
// path, but different from others, we need to keep track
// of the sync operations this Observer has pending, so we
// add it to pendingSyncs.
switch (request.type) {
case OpCode.sync:
zks.pendingSyncs.add(request);
zks.getObserver().request(request);
break;
case OpCode.create:
case OpCode.create2:
case OpCode.createTTL:
case OpCode.createContainer:
case OpCode.delete:
case OpCode.deleteContainer:
case OpCode.setData:
case OpCode.reconfig:
case OpCode.setACL:
case OpCode.multi:
case OpCode.check:
zks.getObserver().request(request);
break;
case OpCode.createSession:
case OpCode.closeSession:
// Don't forward local sessions to the leader.
if (!request.isLocalSession()) {
zks.getObserver().request(request);
}
break;
}
}
} catch (Exception e) {
handleException(this.getName(), e);
}
LOG.info("ObserverRequestProcessor exited loop!");
}
That wraps up the source-code analysis, based on version 3.5.7.
Having walked through the source, you should now have a reasonable picture of how ZooKeeper processes requests. Next, let's lay out the life of a transactional request. Starting from the Leader's ProposalRequestProcessor, it falls roughly into three phases: proposal, sync (logging plus ACK), and commit.

The proposal and logging part is mainly driven by the ProposalRequestProcessor: every machine taking part in the proposal (the Leader and the Followers) is told to write the transaction to its log. Each transactional request then has to be acknowledged by more than half of the voting members (Leader plus Followers):

- once a quorum of ACKs has been collected, the Leader puts the Proposal onto the toBeApplied queue (see org.apache.zookeeper.server.quorum.Leader.tryToCommit);
- the Leader then sends COMMIT to the Followers, and INFORM to the Observers, telling them to commit the Proposal (see org.apache.zookeeper.server.quorum.Leader.commit && inform).

Up to this point, the SyncRequestProcessor -> AckRequestProcessor part of the chain is done.
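The quorum bookkeeping itself is simple to picture. Here is a minimal sketch (a hypothetical class, not Leader.tryToCommit itself) of counting ACKs per zxid and detecting when a strict majority has been reached:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Simplified quorum-ack bookkeeping: each outstanding proposal collects ACKs by server id;
// once more than half of the voting members have acked, the proposal can be committed.
class MiniQuorumTracker {
    private final int votingMembers;
    private final Map<Long, Set<Long>> acksByZxid = new HashMap<>(); // zxid -> acking server ids

    MiniQuorumTracker(int votingMembers) {
        this.votingMembers = votingMembers;
    }

    /** Record an ACK and return true once the proposal has reached a quorum. */
    boolean ack(long zxid, long serverId) {
        Set<Long> acks = acksByZxid.computeIfAbsent(zxid, z -> new HashSet<>());
        acks.add(serverId);
        return acks.size() > votingMembers / 2; // strict majority of the ensemble
    }
}

// Usage: with 5 voting members, a proposal becomes committable on the 3rd distinct ACK.
```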
Next comes the CommitProcessor -> ToBeAppliedRequestProcessor -> FinalRequestProcessor part:

- the CommitProcessor receives the committed request, puts it into its committedRequests queue, and sends the requests one by one to the ToBeAppliedRequestProcessor;
- once the FinalRequestProcessor has finished with a request, the ToBeAppliedRequestProcessor removes the corresponding Proposal from the toBeApplied queue;
- the FinalRequestProcessor also drops the matching change records from outstandingChanges, the ones that were added back in the PrepRequestProcessor.

After that, the transaction is applied to the in-memory database. That database instance keeps a committedLog, capped at 500 entries by default, holding the most recent successful Proposals so that lagging servers can be synced quickly. If the server crashes at this step, the write-ahead log produced in the proposal phase is used to repair the data when the machine comes back up, and a PlayBackListener converts those entries back into proposals and stores them in the committedLog for synchronization.
With this design, you can see that ZooKeeper actually trades away strong consistency for some availability, offering eventual consistency instead: while data is still being synchronized across the cluster, a client whose request lands on a server that has not caught up yet will read stale data. The official ZooKeeper documentation points this out as well (the link is attached in the references):
Consistency Guarantees
ZooKeeper is a high performance, scalable service. Both reads and write operations are designed to be fast, though reads are faster than writes. The reason for this is that in the case of reads, ZooKeeper can serve older data, which in turn is due to ZooKeeper's consistency guarantees:
Sequential Consistency
Updates from a client will be applied in the order that they were sent.
...........
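When a client cannot tolerate stale reads, the documented remedy is to call sync() before the read, which makes the server the client is connected to catch up with the leader first. A minimal sketch (the connect string and path are placeholders, error handling omitted):

```java
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

import java.util.concurrent.CountDownLatch;

public class SyncThenRead {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30_000, event -> { });
        CountDownLatch synced = new CountDownLatch(1);

        // Ask the connected server to catch up with the leader for this path.
        zk.sync("/my-node", (rc, path, ctx) -> synced.countDown(), null);
        synced.await(); // wait until the sync completes

        Stat stat = new Stat();
        byte[] data = zk.getData("/my-node", false, stat); // now reflects prior writes
        System.out.println(new String(data) + " @mzxid " + stat.getMzxid());
        zk.close();
    }
}
```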
In addition, the way ZooKeeper handles transactions looks a lot like two-phase commit. This is, in fact, the ZAB protocol; in the next article we will look at its implementation in detail and introduce its other use: distributed leader election.
References: