The entry points for all Kubernetes components live under the cmd directory; its contents are listed below:
OWNERS
clicheck
cloud-controller-manager
dependencycheck
gendocs
genkubedocs
genman
genswaggertypedocs
genutils
genyaml
importverifier
kube-apiserver
kube-controller-manager
kube-proxy
kube-scheduler
kubeadm
kubectl
kubectl-convert
kubelet
kubemark
linkcheck
preferredimports
The kubectl directory holds a single file, cmd/kubectl/kubectl.go. Like Docker, Kubernetes uses spf13's cobra library to parse and assemble command-line arguments:
command := cmd.NewDefaultKubectlCommand()
pflag.CommandLine.SetNormalizeFunc(cliflag.WordSepNormalizeFunc)
pflag.CommandLine.AddGoFlagSet(goflag.CommandLine)
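To make the wiring concrete, here is a minimal, self-contained sketch of a cobra-based entry point in the same spirit; the "demo" command and its flag are made up for illustration and are not kubectl's actual code:
package main

import (
    "fmt"
    "os"

    "github.com/spf13/cobra"
)

func main() {
    // The root command plays the role NewDefaultKubectlCommand plays in kubectl.
    root := &cobra.Command{
        Use:   "demo",
        Short: "a toy CLI wired with cobra the way kubectl is",
        RunE: func(cmd *cobra.Command, args []string) error {
            greeting, _ := cmd.Flags().GetString("greeting")
            fmt.Println(greeting, args)
            return nil
        },
    }
    root.Flags().String("greeting", "hello", "greeting to print")

    if err := root.Execute(); err != nil {
        os.Exit(1)
    }
}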
The entry point itself contains almost no logic; the real work lives in vendor/k8s.io/kubectl/pkg/cmd/cmd.go, which revolves around two functions:
NewDefaultKubectlCommandWithArgs()
HandlePluginCommand(pluginHandler, cmdPathPieces)
1. NewDefaultKubectlCommandWithArgs
cmd := NewKubectlCommand(in, out, errout)
f := cmdutil.NewFactory(matchVersionKubeConfigFlags)
proxyCmd := proxy.NewCmdProxy(f, ioStreams)
groups := templates.CommandGroups
groups.Add(cmds)
templates.ActsAsRootCommand(cmds, filters, groups...)
Each subcommand is created, placed into CommandGroups, and then processed uniformly through templates. CommandGroups classifies the commands into categories, and each category holds the concrete cmds:
Beginner
create.NewCmdCreate(f, ioStreams),
expose.NewCmdExposeService(f, ioStreams),
run.NewCmdRun(f, ioStreams),
set.NewCmdSet(f, ioStreams),
Intermediate
explain.NewCmdExplain("kubectl", f, ioStreams),
get.NewCmdGet("kubectl", f, ioStreams),
edit.NewCmdEdit(f, ioStreams),
delete.NewCmdDelete(f, ioStreams),
Deploy
rollout.NewCmdRollout(f, ioStreams),
scale.NewCmdScale(f, ioStreams),
autoscale.NewCmdAutoscale(f, ioStreams),
Cluster
certificates.NewCmdCertificate(f, ioStreams),
clusterinfo.NewCmdClusterInfo(f, ioStreams),
top.NewCmdTop(f, ioStreams),
drain.NewCmdCordon(f, ioStreams),
drain.NewCmdUncordon(f, ioStreams),
drain.NewCmdDrain(f, ioStreams),
taint.NewCmdTaint(f, ioStreams),
Troubleshooting
describe.NewCmdDescribe("kubectl", f, ioStreams),
logs.NewCmdLogs(f, ioStreams),
attach.NewCmdAttach(f, ioStreams),
cmdexec.NewCmdExec(f, ioStreams),
portforward.NewCmdPortForward(f, ioStreams),
proxyCmd,
cp.NewCmdCp(f, ioStreams),
auth.NewCmdAuth(f, ioStreams),
debug.NewCmdDebug(f, ioStreams),
Advanced
diff.NewCmdDiff(f, ioStreams),
apply.NewCmdApply("kubectl", f, ioStreams),
patch.NewCmdPatch(f, ioStreams),
replace.NewCmdReplace(f, ioStreams),
wait.NewCmdWait(f, ioStreams),
kustomize.NewCmdKustomize(ioStreams),
Settings
label.NewCmdLabel(f, ioStreams),
annotate.NewCmdAnnotate("kubectl", f, ioStreams),
completion.NewCmdCompletion(ioStreams.Out, ""),
These are all the commands kubectl offers, grouped into the following categories: basic, intermediate, deployment, cluster management, troubleshooting, advanced, and settings. This is a nice approach: a sprawling set of commands becomes clear and easy to navigate once categorized, like storage boxes that quickly bring order to clutter.
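To illustrate the grouping idea outside kubectl, here is a small self-contained sketch; the group type and attach helper below are hypothetical stand-ins, not kubectl's templates.CommandGroups, which additionally feeds the group titles into the generated help output:
package main

import "github.com/spf13/cobra"

// group is a hypothetical stand-in for a named bundle of subcommands.
type group struct {
    message  string
    commands []*cobra.Command
}

// attach adds every group's commands to the root command.
func attach(root *cobra.Command, groups []group) {
    for _, g := range groups {
        root.AddCommand(g.commands...)
    }
}

func main() {
    root := &cobra.Command{Use: "demo"}
    groups := []group{
        {message: "Basic Commands", commands: []*cobra.Command{
            {Use: "create", Run: func(*cobra.Command, []string) {}},
            {Use: "expose", Run: func(*cobra.Command, []string) {}},
        }},
        {message: "Troubleshooting Commands", commands: []*cobra.Command{
            {Use: "logs", Run: func(*cobra.Command, []string) {}},
        }},
    }
    attach(root, groups)
    _ = root.Execute()
}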
2. HandlePluginCommand
pluginHandler.Execute(foundBinaryPath, cmdArgs[len(remainingArgs):], os.Environ())
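HandlePluginCommand is kubectl's plugin dispatch: when no built-in subcommand matches, it looks for an executable named kubectl-<name> on PATH and runs it with the remaining arguments. A rough sketch of the idea follows; the helper is illustrative, not kubectl's exact implementation, which among other things also handles multi-word plugin names:
package main

import (
    "fmt"
    "os"
    "os/exec"
)

// handlePlugin treats the first argument as the plugin name and runs the
// matching kubectl-<name> binary, passing the rest of the arguments through.
func handlePlugin(args []string) error {
    if len(args) == 0 {
        return fmt.Errorf("no plugin name given")
    }
    bin, err := exec.LookPath("kubectl-" + args[0])
    if err != nil {
        return err
    }
    cmd := exec.Command(bin, args[1:]...)
    cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
    cmd.Env = os.Environ()
    return cmd.Run()
}

func main() {
    if err := handlePlugin(os.Args[1:]); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}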
That covers the overall framework of kubectl's built-in commands. Below, the kubectl proxy and kubectl get commands are walked through in detail as two examples.
1. kubectl proxy
It implements an HTTP proxy; under the hood it relies on httputil.NewSingleHostReverseProxy, the reverse-proxy helper in the Go standard library. The entry code lives in
vendor/k8s.io/kubectl/pkg/cmd/proxy/proxy.go:
o := NewProxyOptions(ioStreams)
cmdutil.CheckErr(o.Complete(f))
cmdutil.CheckErr(o.Validate())
cmdutil.CheckErr(o.RunProxy())
All the other commands follow the same pattern: build the options from the flags, complete them, validate that they are legal, and then run the command.
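The pattern itself is easy to reproduce; here is a minimal sketch with an invented proxyOptions type, just to show the Complete, Validate, Run sequencing:
package main

import (
    "fmt"
    "os"
)

type proxyOptions struct {
    port    int
    address string
}

// Complete fills in defaults that were not supplied on the command line.
func (o *proxyOptions) Complete() error {
    if o.address == "" {
        o.address = "127.0.0.1"
    }
    return nil
}

// Validate rejects flag combinations that cannot work.
func (o *proxyOptions) Validate() error {
    if o.port < 0 || o.port > 65535 {
        return fmt.Errorf("invalid port %d", o.port)
    }
    return nil
}

// Run performs the actual work once the options are complete and valid.
func (o *proxyOptions) Run() error {
    fmt.Printf("would listen on %s:%d\n", o.address, o.port)
    return nil
}

func main() {
    o := &proxyOptions{port: 8001}
    for _, step := range []func() error{o.Complete, o.Validate, o.Run} {
        if err := step(); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
}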
First, the parameter validation (a sketch of the idea follows the snippet):
AcceptPaths: proxy.MakeRegexpArrayOrDie(o.acceptPaths),
RejectPaths: proxy.MakeRegexpArrayOrDie(o.rejectPaths),
AcceptHosts: proxy.MakeRegexpArrayOrDie(o.acceptHosts),
RejectMethods: proxy.MakeRegexpArrayOrDie(o.rejectMethods),
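Conceptually this just compiles each comma-separated pattern from the flag value and fails fast on a bad one. A small sketch of the idea, using my own helper rather than the real MakeRegexpArrayOrDie:
package main

import (
    "fmt"
    "regexp"
    "strings"
)

// makeRegexpArray splits a comma-separated flag value and compiles each
// piece, returning an error on the first pattern that does not compile.
func makeRegexpArray(s string) ([]*regexp.Regexp, error) {
    parts := strings.Split(s, ",")
    out := make([]*regexp.Regexp, 0, len(parts))
    for _, p := range parts {
        re, err := regexp.Compile(p)
        if err != nil {
            return nil, fmt.Errorf("bad pattern %q: %w", p, err)
        }
        out = append(out, re)
    }
    return out, nil
}

func main() {
    res, err := makeRegexpArray("^/api/.*,^/healthz$")
    fmt.Println(len(res), err)
}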
Next, how the command actually runs:
server, err := proxy.NewServer(o.staticDir, o.apiPrefix, o.staticPrefix, o.filter, o.clientConfig, o.keepalive)
l, err = server.Listen(o.address, o.port)
l, err = server.ListenUnix(o.unixSocket)
return server.ServeOnListener(l)
A proxy server is started, listening either on an address and port or on a unix socket, and every request it receives is reverse-proxied to the backend. The proxy server is implemented in vendor/k8s.io/kubectl/pkg/proxy/proxy_server.go; three functions deserve attention: creating the server, wiring up the handler, and listening on the port.
func NewServer(filebase string, apiProxyPrefix string, staticPrefix string, filter *FilterServer, cfg *rest.Config, keepalive time.Duration) (*Server, error)
proxyHandler, err := NewProxyHandler(apiProxyPrefix, filter, cfg, keepalive)
mux := http.NewServeMux()
mux.Handle(apiProxyPrefix, proxyHandler)
func (s *Server) ServeOnListener(l net.Listener) error
server.Serve(l)
func NewProxyHandler(apiProxyPrefix string, filter *FilterServer, cfg *rest.Config, keepalive time.Duration) (http.Handler, error)
target, err := url.Parse(host)
transport, err := rest.TransportFor(cfg)
upgradeTransport, err := makeUpgradeTransport(cfg, keepalive)
proxy := proxy.NewUpgradeAwareHandler(target, transport, false, false, responder)
proxyServer := http.Handler(proxy)
The code looks much like an ordinary HTTP server. You can see that NewProxyHandler is a thin wrapper around NewUpgradeAwareHandler, whose implementation lives in:
vendor/k8s.io/apimachinery/pkg/util/proxy/upgradeaware.go
func NewUpgradeAwareHandler(location *url.URL, transport http.RoundTripper, wrapTransport, upgradeRequired bool, responder ErrorResponder) *UpgradeAwareHandler
Inside ServeHTTP it ultimately just calls httputil.NewSingleHostReverseProxy:
func (h *UpgradeAwareHandler) ServeHTTP(w http.ResponseWriter, req *http.Request)
proxy := httputil.NewSingleHostReverseProxy(&url.URL{Scheme: h.Location.Scheme, Host: h.Location.Host})
proxy.ServeHTTP(w, newReq)
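Reduced to a standalone program, the same idea looks roughly like this; the listen address and backend URL below are placeholders, not the values kubectl derives from kubeconfig:
package main

import (
    "log"
    "net"
    "net/http"
    "net/http/httputil"
    "net/url"
)

func main() {
    target, err := url.Parse("http://127.0.0.1:6443") // placeholder backend
    if err != nil {
        log.Fatal(err)
    }

    // Equivalent in spirit to what ServeHTTP ends up doing for plain
    // requests: delegate to a single-host reverse proxy.
    proxy := httputil.NewSingleHostReverseProxy(target)

    mux := http.NewServeMux()
    mux.Handle("/", proxy)

    // kubectl proxy can listen on address:port or on a unix socket;
    // here we only listen on TCP.
    l, err := net.Listen("tcp", "127.0.0.1:8001")
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("proxying on %s -> %s", l.Addr(), target)
    log.Fatal(http.Serve(l, mux))
}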
In one sentence: kubectl proxy is essentially a reverse-proxy tool.
2. kubectl get
The get command is implemented in vendor/k8s.io/kubectl/pkg/cmd/get/get.go, and the cmd is initialized with the same pattern:
func NewCmdGet(parent string, f cmdutil.Factory, streams genericclioptions.IOStreams) *cobra.Command
o := NewGetOptions(parent, streams)
cmdutil.CheckErr(o.Complete(f, cmd, args))
cmdutil.CheckErr(o.Validate(cmd))
cmdutil.CheckErr(o.Run(f, cmd, args))
The Run function is the part worth focusing on:
func (o *GetOptions) Run(f cmdutil.Factory, cmd *cobra.Command, args []string) error
restClient, err := f.RESTClient()
return rawhttp.RawGet(restClient, o.IOStreams, o.Raw)
return o.watch(f, cmd, args)
r := f.NewBuilder().
Unstructured().
NamespaceParam(o.Namespace).DefaultNamespace().AllNamespaces(o.AllNamespaces).
FilenameParam(o.ExplicitNamespace, &o.FilenameOptions).
LabelSelectorParam(o.LabelSelector).
FieldSelectorParam(o.FieldSelector).
RequestChunksOf(chunkSize).
ResourceTypeOrNameArgs(true, args...).
ContinueOnError().
Latest().
Flatten().
TransformRequests(o.transformRequests).
Do()
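Skipping kubectl's Builder pipeline, what this chain boils down to can be sketched with client-go's higher-level clientset: load a kubeconfig, build a client, and issue a GET for the pod list. The kubeconfig path and namespace here are assumptions for the example:
package main

import (
    "context"
    "fmt"
    "log"
    "os"
    "path/filepath"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
    cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        log.Fatal(err)
    }
    clientset, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        log.Fatal(err)
    }
    // This issues GET /api/v1/namespaces/default/pods against the API server.
    pods, err := clientset.CoreV1().Pods("default").List(context.Background(), metav1.ListOptions{})
    if err != nil {
        log.Fatal(err)
    }
    for _, p := range pods.Items {
        fmt.Println(p.Name)
    }
}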
Inside, the data is fetched via the GET method of a RESTful HTTP client. The code that builds this REST client is as follows:
vendor/k8s.io/kubectl/pkg/cmd/util/factory.go
type Factory interface {}
RESTClient() (*restclient.RESTClient, error)
vendor/k8s.io/kubectl/pkg/cmd/util/factory_client_access.go
func (f *factoryImpl) RESTClient() (*restclient.RESTClient, error)
clientConfig, err := f.ToRESTConfig()
return restclient.RESTClientFor(clientConfig)
vendor/k8s.io/client-go/rest/config.go
func RESTClientFor(config *Config) (*RESTClient, error)
baseURL, versionedAPIPath, err := defaultServerUrlFor(config)
transport, err := TransportFor(config)
restClient, err := NewRESTClient(baseURL, versionedAPIPath, clientContent, rateLimiter, httpClient)
vendor/k8s.io/client-go/rest/client.go
func NewRESTClient(baseURL *url.URL, versionedAPIPath string, config ClientContentConfig, rateLimiter flowcontrol.RateLimiter, client *http.Client) (*RESTClient, error)
type RESTClient struct {}
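A sketch closer to the chain above uses rest.RESTClientFor directly: it resolves the base URL and versioned API path and builds the transport, and the request then becomes a plain GET on /api/v1/namespaces/default/pods. The kubeconfig path is again an assumption, and GroupVersion and NegotiatedSerializer have to be filled in by hand for RESTClientFor to accept the config:
package main

import (
    "context"
    "fmt"
    "log"
    "os"
    "path/filepath"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/kubernetes/scheme"
    "k8s.io/client-go/rest"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
    cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        log.Fatal(err)
    }
    cfg.APIPath = "/api"
    cfg.GroupVersion = &corev1.SchemeGroupVersion
    cfg.NegotiatedSerializer = scheme.Codecs.WithoutConversion()

    // RESTClientFor derives the base URL / versioned API path and the
    // transport, mirroring the call chain listed above.
    rc, err := rest.RESTClientFor(cfg)
    if err != nil {
        log.Fatal(err)
    }
    raw, err := rc.Get().Namespace("default").Resource("pods").Do(context.Background()).Raw()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(len(raw), "bytes of pod list JSON")
}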
In the end, kubectl is just a command-line wrapper around the APIs exposed by kube-apiserver; it is essentially a client, which is the same idea as the Docker source code.