
Nature | Published 14 May: Reproducing a Spatial Transcriptomics Paper (Part 3)

天意生信云 · 2025-06-08

Today we continue reproducing the paper shared last week, "Spatial transcriptomics reveals human cortical layer and area specification", this time working through Figure 3, which shows dynamically changing laminar gene expression in the cortex.

Original article:

https://www.nature.com/articles/s41586-025-09010-1#Sec2

Data download:

https://zenodo.org/records/14422018

Code download:

https://github.com/carsen-stringer/vizgen-postprocessing

Dynamically changing laminar gene expression

[Fig. 3a, b]

Building on the cortical depth (CD) analysis framework, the authors first evaluated how marker genes identified in the adult human cortex are expressed across layers in the fetal cortex, and found that their expression differs markedly (Fig. a). They then screened for genes with layer-dependent enriched expression and quantified their laminar expression within the cortical plate (CP) (Fig. b).

Python:
import scanpy as sc
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

adata = sc.read('merscope_integrated_855.h5ad')
adata.obs_names_make_unique()
adata.X = np.exp(adata.X.toarray()) - 1  # undo the ln(x+1) transform before renormalizing
sc.pp.normalize_total(adata)

def find_genes(sample, region, area):
  # Excitatory-neuron cells of one section/area that have a cortical-depth value
  adata1 = adata[(adata.obs['sample']==sample) & (adata.obs.region==region) & (adata.obs.area==area) & (~adata.obs['cortical_depth'].isna()) & (adata.obs.H1_annotation.isin(['EN-ET', 'EN-IT', 'EN-Mig']))].copy()
  ge1 = adata1.X.round().astype('int')
  # Expand each cell's cortical depth by its rounded transcript count, giving one
  # depth value per transcript and per gene (use .iloc for positional indexing)
  dist_all = [np.concatenate([adata1.obs.cortical_depth.iloc[i]*np.ones(ge1[:,j][i]) for i in range(len(adata1.obs.cortical_depth)) if ge1[:,j][i]>0]) for j in range(ge1.shape[1])]
  # The 40 genes with the smallest interquartile range of transcript depth are the
  # most layer-restricted
  iqr = np.array([np.quantile(j,0.75)-np.quantile(j,0.25) for j in dist_all])
  dist40 = [dist_all[i] for i in iqr.argsort()[:40]]
  genes = adata1.var.index[iqr.argsort()[:40]]
  dict1 = dict(zip(genes, dist40))
  genes1 = [[g]*len(dict1[g]) for g in dict1.keys()]
  genes1 = [x for xs in genes1 for x in xs]
  df1 = pd.DataFrame(genes1)
  df1.columns = ['gene']
  # np.hstack needs a list or tuple, not a dict_values view (see the note below)
  df1['cortical_depth'] = np.hstack(list(dict1.values()))
  # Order genes by their median cortical depth
  genes2 = list(df1.groupby('gene').aggregate('median').sort_values(by='cortical_depth').index)
  return genes2

def make_violin(sample, region, area, genes):
  adata1 = adata[(adata.obs['sample']==sample) & (adata.obs.region==region) & (adata.obs.area==area) & (~adata.obs['cortical_depth'].isna()) & (adata.obs.H1_annotation.isin(['EN-ET', 'EN-IT', 'EN-Mig']))].copy()
  adata1 = adata1[:,genes].copy()
  ge1 = adata1.X.round().astype('int')
  dist40 = [np.concatenate([adata1.obs.cortical_depth.iloc[i]*np.ones(ge1[:,j][i]) for i in range(len(adata1.obs.cortical_depth)) if ge1[:,j][i]>0]) for j in range(ge1.shape[1])]
  dict1 = dict(zip(genes, dist40))
  genes1 = [[g]*len(dict1[g]) for g in dict1.keys()]
  genes1 = [x for xs in genes1 for x in xs]
  df1 = pd.DataFrame(genes1)
  df1.columns = ['gene']
  df1['cortical_depth'] = np.hstack(list(dict1.values()))
  # Layer boundaries: midpoints between the depth quartiles of reference cell types
  # from adjacent layers
  layer_types = ['EN-L2-1', 'EN-IT-L3-A', 'EN-IT-L4-1', 'EN-ET-L5-1', 'EN-IT-L6-1']
  l2_3 = (np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[0]].cortical_depth, 0.25)+np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[1]].cortical_depth, 0.75))/2
  l3_4 = (np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[1]].cortical_depth, 0.25)+np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[2]].cortical_depth, 0.75))/2
  l4_5 = (np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[2]].cortical_depth, 0.25)+np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[3]].cortical_depth, 0.75))/2
  l5_6 = (np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[3]].cortical_depth, 0.25)+np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[4]].cortical_depth, 0.75))/2
  l = [l2_3, l3_4, l4_5, l5_6]
  plt.figure(figsize=(25,5));
  plot = sns.violinplot(x='gene', y='cortical_depth', hue='gene', data=df1, order=genes, density_norm='width', inner = None, dodge=False, cut=0); plot.legend().remove();
  [plot.axhline(i, linestyle = '--') for i in l];  # dashed lines mark the layer boundaries
  plt.xticks(rotation=90, fontsize=9); plt.yticks(fontsize=9); plot.set(xlabel=None); plot.set_ylabel('Cortical Depth', fontsize=20); plt.ylim(0,1); plt.tight_layout();
  plt.savefig(sample + '_' + region + '_' + area + '_gene_violin.png', dpi=200, pad_inches=0)

def main():
  genes = find_genes('FB123', 'F1', 'A-PFC')
  make_violin('FB123', 'F1', 'A-PFC', genes)
  make_violin('FB123', 'O2', 'A-Occi', genes)

if __name__=="__main__":
    main()
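
To make the gene-selection criterion in find_genes concrete, here is a toy, self-contained sketch (made-up numbers, not from the dataset) of the per-transcript depth expansion and the interquartile-range (IQR) test it ranks genes by:

Python:
import numpy as np

# Toy data: three cells at cortical depths 0.2, 0.5 and 0.8 express a gene with
# rounded counts 3, 1 and 0 -> expand each depth by its count
depths = np.array([0.2, 0.5, 0.8])
counts = np.array([3, 1, 0])
dist = np.concatenate([d * np.ones(c) for d, c in zip(depths, counts) if c > 0])
print(dist)  # [0.2 0.2 0.2 0.5]
# A small IQR means the transcripts sit in a narrow band of depths,
# i.e. the gene is layer-restricted
print(np.quantile(dist, 0.75) - np.quantile(dist, 0.25))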

I ran this with Python 3.13.0; a few data-structure issues came up during execution and needed adjusting, for example:

Python:
df1['cortical_depth'] = np.hstack(dict1.values())

dict1.values() returns a dict_values view rather than a sequence, which is likely what makes np.hstack() fail here: np.hstack() expects a list or tuple of arrays, and a dict_values object does not directly satisfy that requirement. The fix:

Python:
df1['cortical_depth'] = np.hstack(list(dict1.values()))
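
A minimal sketch of the difference (toy dict, not from the pipeline):

Python:
import numpy as np

d = {'a': np.array([1, 2]), 'b': np.array([3])}
# np.hstack(d.values())             # a dict_values view; rejected by recent NumPy
print(np.hstack(list(d.values())))  # [1 2 3] -- materializing a list works everywhere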
[Output: violin plots of layer-restricted genes ordered by cortical depth]

CBLN2, a synaptic organizer gene carrying hominid-specific variants in a retinoic-acid-responsive enhancer, shows strong frontal-enriched expression in layers 2 and 3 at gestational weeks 22 to 34 (GW22–GW34). Its low-level expression in layer 6, by contrast, is maintained across cortical areas from GW15 to GW34 (Fig. c, d).

Python:
import numpy as np
import scanpy as sc
import matplotlib.pyplot as plt
from multiprocessing import Pool
from matplotlib_scalebar.scalebar import ScaleBar
from matplotlib.colors import Normalize  # needed for the manual colorbar below
from itertools import repeat

adata = sc.read('merscope_integrated_855.h5ad')
adata.obs_names_make_unique()

cmap = 'YlGnBu'  # module-level so make_plot (and the worker processes) can see it

def make_plot(sample, region, gene):
    adata1 = adata[(adata.obs['sample'] == sample) & (adata.obs['region'] == region)].copy()
    sc.pp.scale(adata1, zero_center=True, max_value=6)  # z-score per gene, clipped at 6

    fig, ax = plt.subplots(figsize=(4, 4))
    # Normalize the colormap to the scaled expression range of this gene
    vmin = adata1[:, gene].X.min()
    vmax = adata1[:, gene].X.max()
    norm = Normalize(vmin=vmin, vmax=vmax)

    # scanpy draws into plt.gca() by default, so pass ax explicitly
    sc.pl.embedding(
        adata1,
        basis="spatial",
        use_raw=False,
        color=gene,
        show=False,
        s=2,
        color_map=cmap,
        alpha=1,
        colorbar_loc=None,
        ax=ax
    )

    # Manual horizontal colorbar above the panel
    sm = plt.cm.ScalarMappable(cmap=cmap, norm=norm)
    sm.set_array([])
    fig.colorbar(sm, ax=ax, location='top', orientation='horizontal', label=gene, shrink=0.3)

    # 500-um scale bar
    scalebar = ScaleBar(1, "um", fixed_value=500, location='lower right')
    ax.add_artist(scalebar)

    ax.axis("off")
    ax.set_aspect("equal")
    fig.tight_layout()
    fig.savefig(f"{sample}_{region}_{gene}.png", dpi=500)
    plt.close(fig)

samples = ['FB123-F1', 'FB123-F2', 'FB123-P1', 'FB123-O2', 'FB121-F1', 'UMB1117-F1a', 'UMB5900-BA9', 'FB080-O1c']
genes = ['CBLN2', 'SRM', 'RASGRF2', 'B3GALT2', 'SYT6', 'ETV1', 'PENK', 'CUX2', 'GLRA3', 'CUX1', 'RORB', 'PTK2B', 'FOXP1', 'TOX', 'FOXP2', 'TLE4','CALN1', 'SYBU', 'CPNE8', 'CYP26A1', 'STK32B', 'VSTM2L', 'CRYM', 'GNAL', 'PCDH17', 'FSTL5', 'NEUROG2', 'SLN', 'SOX5', 'TAFA1', 'ARFGEF3', 'OPCML', 'NEFM', 'NFIB', 'PPP3CA','B3GNT2', 'SORCS1', 'TRPM3', 'LPL']

def main():
    for sample in samples:
        with Pool(8) as pool:
            pool.starmap(make_plot, zip(repeat(sample.split('-')[0]), repeat(sample.split('-')[1]), genes))

if __name__ == "__main__":
    main()
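
To sanity-check one panel before launching the full multiprocessing run, the function can be called directly (this assumes the script above has already been loaded, for example in an interactive session):

Python:
# Draw a single sample/region/gene combination serially
make_plot('FB123', 'F1', 'CBLN2')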
[Output: spatial expression maps (CBLN2 and other markers) across sections]

A set of genes shows pronounced frontal-enriched expression in layers 2 and 3, but their peak expression times differ (Fig. e), suggesting that the specification of frontal layers 2 and 3 is accompanied by temporally dynamic transcriptional changes.

Python:
import scanpy as sc
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from itertools import compress
from multiprocessing import Pool

adata = sc.read('merscope_integrated_855.h5ad')
adata.obs_names_make_unique()
adata.X = np.exp(adata.X.toarray()) - 1  # undo the ln(x+1) transform before renormalizing
sc.pp.normalize_total(adata)

gw15 = ['UMB1367', 'UMB1117']
gw20 = ['FB080', 'FB121']
gw22 = ['FB123']
gw34 = ['UMB5900']


# Exclude selected sections/areas from three samples
adata = adata[~(((adata.obs['sample'] == 'FB080') & (adata.obs.region=='O1b')) | ((adata.obs['sample'] == 'UMB1117') & (adata.obs.region=='O1')) | ((adata.obs['sample'] == 'UMB1367') & (adata.obs.region=='O1') & (adata.obs.area=='C-V2')))].copy()

# Strip the lobe prefix from area labels (e.g., 'A-PFC' -> 'PFC')
adata.obs['area'] = [i.split('-')[1] if isinstance(i, str) and '-' in i else i for i in adata.obs['area']]


def mean_exp(adata, section, area, layers):
  # Mean expression of the current gene per layer for one section/area
  layer_dict = dict(zip(layers, [np.nan]*len(layers)))
  sample1 = section.split('_')[0]
  region = section.split('_')[1]
  adata2 = adata[(adata.obs['sample']==sample1) & (adata.obs.region==region) & (adata.obs.area==area)].copy()
  for layer in layers:
    adata3 = adata2[adata2.obs.layer==layer].copy()
    if sum(adata2.obs.layer==layer)>0:
      layer_dict[layer] = adata3.X.mean()
    else:
      layer_dict[layer] = np.nan
  return layer_dict


def cp_annotation(section, area):
  # Assign each cortical-plate cell to a layer from its cortical depth and save
  # per-layer cell indices. Its calls in make_heatmap below are commented out
  # (the indices only need to be generated once). The original snippet referenced
  # undefined variables (image, sample1, region); here sample1/region are derived
  # from the section name as in mean_exp, and the CSV prefix '<section>_<area>'
  # is an assumption to be adjusted to your own file layout.
  sample1 = section.split('_')[0]
  region = section.split('_')[1]
  obs = pd.read_csv(section + '_' + area + '_obs_cp.csv', index_col = 0)
  obs.cp_dist = np.sqrt(obs.cp_dist)
  adata1 = adata[(adata.obs['sample']==sample1) & (adata.obs.region==region)].copy()
  # Layer boundaries from reference cell types; the layers present differ by age
  if section.split('_')[0] in gw15:
    cp_layers = ['l4','l5','l6']
    layer_types = ['EN-IT-L4-1', 'EN-ET-L5-1', 'EN-IT-L6-1']
    l4_5 = (np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[0]].cortical_depth, 0.25)+np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[1]].cortical_depth, 0.75))/2
    l5_6 = (np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[1]].cortical_depth, 0.25)+np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[2]].cortical_depth, 0.75))/2
    l = [l4_5,l5_6,0]
  elif section.split('_')[0] in gw20:
    cp_layers = ['l3','l4','l5','l6']
    layer_types = ['EN-IT-L2/3-A1', 'EN-IT-L4-1', 'EN-ET-L5-1', 'EN-IT-L6-1']
    l3_4 = (np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[0]].cortical_depth, 0.25)+np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[1]].cortical_depth, 0.75))/2
    l4_5 = (np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[1]].cortical_depth, 0.25)+np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[2]].cortical_depth, 0.75))/2
    l5_6 = (np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[2]].cortical_depth, 0.25)+np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[3]].cortical_depth, 0.75))/2
    l = [l3_4,l4_5,l5_6,0]
  elif section.split('_')[0] in gw22:
    cp_layers = ['l2','l3','l4','l5','l6']
    layer_types = ['EN-L2-1', 'EN-IT-L3-A', 'EN-IT-L4-1', 'EN-ET-L5-1', 'EN-IT-L6-1']
    l2_3 = (np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[0]].cortical_depth, 0.25)+np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[1]].cortical_depth, 0.75))/2
    l3_4 = (np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[1]].cortical_depth, 0.25)+np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[2]].cortical_depth, 0.75))/2
    l4_5 = (np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[2]].cortical_depth, 0.25)+np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[3]].cortical_depth, 0.75))/2
    l5_6 = (np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[3]].cortical_depth, 0.25)+np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[4]].cortical_depth, 0.75))/2
    l = [l2_3,l3_4,l4_5,l5_6,0]
  elif section.split('_')[0] in gw34:
    cp_layers = ['l2','l3','l4','l5','l6']
    layer_types = ['EN-L2-4', 'EN-IT-L3-late', 'EN-IT-L4-late', 'EN-ET-L5-1', 'EN-IT-L6-late']
    l2_3 = (np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[0]].cortical_depth, 0.25)+np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[1]].cortical_depth, 0.75))/2
    l3_4 = (np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[1]].cortical_depth, 0.25)+np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[2]].cortical_depth, 0.75))/2
    l4_5 = (np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[2]].cortical_depth, 0.25)+np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[3]].cortical_depth, 0.75))/2
    l5_6 = (np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[3]].cortical_depth, 0.25)+np.quantile(adata1.obs[adata1.obs.H3_annotation==layer_types[4]].cortical_depth, 0.75))/2
    l = [l2_3,l3_4,l4_5,l5_6,0]
  # A cell belongs to the first layer whose boundary lies at or below its depth
  layer_ann = [cp_layers[np.where((i-l)>=0)[0][0]] for i in np.array(obs.cortical_depth)]
  layer_list = ['l2','l3','l4','l5','l6']
  [np.save(section+'_'+area+'_'+j+'_index.npy', np.array(list(compress(obs.index, [i==j for i in layer_ann])))) for j in layer_list]


# Areas present in each sample_region section
dict1 = {
    f"{sample}_{region}": list(set(areas.dropna()))
    for (sample, region), areas in adata.obs.groupby(['sample', 'region'])['area']
}

# Only excitatory neurons keep a cortical-plate layer label; use .loc rather than
# chained indexing, which is silently ineffective under pandas copy-on-write
mask = (adata.obs['layer'].isin(['l2', 'l3', 'l4', 'l5', 'l6'])) & (~adata.obs['H1_annotation'].isin(['EN-IT', 'EN-Mig', 'EN-ET']))
adata.obs.loc[mask, 'layer'] = np.nan


# Anterior-to-posterior ordering of cortical areas for the heatmap columns
order = ['PFC', 'PMC', 'M1', 'S1', 'Par', 'Temp', 'Occi', 'V2', 'V1']


# One entry per (section, area) pair, parallel to the flattened area lists
sections_all = [[i]*len(dict1[i]) for i in dict1.keys()]
sections_all = [x for xs in sections_all for x in xs]

def make_heatmap(gene):
  print(gene)
  adata1 = adata[:,gene].copy()
  layers = ['l2', 'l3', 'l4', 'l5', 'l6', 'sp', 'iz', 'osvz', 'isvz', 'vz']
  # GW15: mean expression of the gene per layer for every section/area
  sections = list(compress(sections_all, [sum([i.startswith(j) for j in gw15]) for i in sections_all]))
  _, idx = np.unique(sections, return_index = True)
  sections_unique = list(np.array(sections)[np.sort(idx)])
  images = [dict1[i] for i in sections_unique]
  images = [x for xs in images for x in xs]
  #[cp_annotation(i,j) for i,j in zip(sections, images)]  # run once to generate layer indices
  gw15_dict = [mean_exp(adata1,i,j,layers) for i,j in zip(sections, images)]
  gw15_df = pd.DataFrame(gw15_dict).transpose()
  images = [i.split('-')[1] if (len(i.split('-')) > 1) else i for i in images]
  gw15_df.columns = images
  images_ordered = [i for j in order for i in images if i==j]
  _, idx = np.unique(images_ordered, return_index = True)
  images_ordered = list(np.array(images_ordered)[np.sort(idx)])
  gw15_df = gw15_df[images_ordered]
  # GW20
  sections = list(compress(sections_all, [sum([i.startswith(j) for j in gw20]) for i in sections_all]))
  _, idx = np.unique(sections, return_index = True)
  sections_unique = list(np.array(sections)[np.sort(idx)])
  images = [dict1[i] for i in sections_unique]
  images = [x for xs in images for x in xs]
  gw20_dict = [mean_exp(adata1,i,j,layers) for i,j in zip(sections, images)]
  gw20_df = pd.DataFrame(gw20_dict).transpose()
  images = [i.split('-')[1] if (len(i.split('-')) > 1) else i for i in images]
  gw20_df.columns = images
  images_ordered = [i for j in order for i in images if i==j]
  _, idx = np.unique(images_ordered, return_index = True)
  images_ordered = list(np.array(images_ordered)[np.sort(idx)])
  gw20_df = gw20_df[images_ordered]
  # GW22
  sections = list(compress(sections_all, [sum([i.startswith(j) for j in gw22]) for i in sections_all]))
  _, idx = np.unique(sections, return_index = True)
  sections_unique = list(np.array(sections)[np.sort(idx)])
  images = [dict1[i] for i in sections_unique]
  images = [x for xs in images for x in xs]
  gw22_dict = [mean_exp(adata1,i,j,layers) for i,j in zip(sections, images)]
  gw22_df = pd.DataFrame(gw22_dict).transpose()
  images = [i.split('-')[1] if (len(i.split('-')) > 1) else i for i in images]
  gw22_df.columns = images
  images_ordered = [i for j in order for i in images if i==j]
  _, idx = np.unique(images_ordered, return_index = True)
  images_ordered = list(np.array(images_ordered)[np.sort(idx)])
  gw22_df = gw22_df[images_ordered]
  # GW34 sections only cover the cortical-plate layers
  layers = ['l2', 'l3', 'l4', 'l5', 'l6']
  sections = list(compress(sections_all, [sum([i.startswith(j) for j in gw34]) for i in sections_all]))
  _, idx = np.unique(sections, return_index = True)
  sections_unique = list(np.array(sections)[np.sort(idx)])
  images = [dict1[i] for i in sections_unique]
  images = [x for xs in images for x in xs]
  gw34_dict = [mean_exp(adata1,i,j,layers) for i,j in zip(sections, images)]
  gw34_df = pd.DataFrame(gw34_dict).transpose()
  images = [i.split('-')[1] if (len(i.split('-')) > 1) else i for i in images]
  gw34_df.columns = images  # name the columns before computing the ordered list
  images_ordered = [i for j in order for i in images if i==j]
  _, idx = np.unique(images_ordered, return_index = True)
  images_ordered = list(np.array(images_ordered)[np.sort(idx)])
  gw34_df = gw34_df[images_ordered]
  # Pad the missing non-CP layers with NaN so all four heatmaps share the same rows
  gw34_df = pd.concat((gw34_df, pd.DataFrame(np.nan, index = ['sp', 'iz', 'osvz', 'isvz', 'vz'], columns = gw34_df.columns)))
  # Joint min-max scaling across the four gestational ages
  grid_max = max(gw15_df.max().max(), gw20_df.max().max(), gw22_df.max().max(), gw34_df.max().max())
  grid_min = min(gw15_df.min().min(), gw20_df.min().min(), gw22_df.min().min(), gw34_df.min().min())
  gw15_df = (gw15_df - grid_min) / (grid_max - grid_min)
  gw20_df = (gw20_df - grid_min) / (grid_max - grid_min)
  gw22_df = (gw22_df - grid_min) / (grid_max - grid_min)
  gw34_df = (gw34_df - grid_min) / (grid_max - grid_min)
  # Average duplicate area columns; DataFrame.groupby(axis=1) is deprecated in recent pandas
  gw15_df = gw15_df.T.groupby(level=0).mean().T
  gw20_df = gw20_df.T.groupby(level=0).mean().T
  gw22_df = gw22_df.T.groupby(level=0).mean().T
  gw34_df = gw34_df.T.groupby(level=0).mean().T
  images_ordered = [i for j in order for i in gw15_df.columns if i==j]
  _, idx = np.unique(images_ordered, return_index = True)
  images_ordered = list(np.array(images_ordered)[np.sort(idx)])
  gw15_df = gw15_df[images_ordered]
  images_ordered = [i for j in order for i in gw20_df.columns if i==j]
  _, idx = np.unique(images_ordered, return_index = True)
  images_ordered = list(np.array(images_ordered)[np.sort(idx)])
  gw20_df = gw20_df[images_ordered]
  images_ordered = [i for j in order for i in gw22_df.columns if i==j]
  _, idx = np.unique(images_ordered, return_index = True)
  images_ordered = list(np.array(images_ordered)[np.sort(idx)])
  gw22_df = gw22_df[images_ordered]
  images_ordered = [i for j in order for i in gw34_df.columns if i==j]
  _, idx = np.unique(images_ordered, return_index = True)
  images_ordered = list(np.array(images_ordered)[np.sort(idx)])
  gw34_df = gw34_df[images_ordered]
  # Rescale once more after averaging so the shared colorbar spans exactly [0, 1]
  grid_max = max(gw15_df.max().max(), gw20_df.max().max(), gw22_df.max().max(), gw34_df.max().max())
  grid_min = min(gw15_df.min().min(), gw20_df.min().min(), gw22_df.min().min(), gw34_df.min().min())
  gw15_df = (gw15_df - grid_min) / (grid_max - grid_min)
  gw20_df = (gw20_df - grid_min) / (grid_max - grid_min)
  gw22_df = (gw22_df - grid_min) / (grid_max - grid_min)
  gw34_df = (gw34_df - grid_min) / (grid_max - grid_min)
  # Four heatmaps side by side plus a shared colorbar axis
  fig, axs = plt.subplots(figsize = (19,10), ncols=5, gridspec_kw=dict(width_ratios=[5,6,5,6,1]));
  sns.heatmap(gw15_df, cbar = False, ax = axs[0], cmap = 'rainbow', vmin=0, vmax=1); axs[0].set_title('GW15', size=25);
  sns.heatmap(gw20_df, yticklabels = False, cbar = False, ax = axs[1], cmap = 'rainbow', vmin=0, vmax=1); axs[1].set_title('GW20', size=25);
  sns.heatmap(gw22_df, yticklabels = False, cbar = False, ax = axs[2], cmap = 'rainbow', vmin=0, vmax=1); axs[2].set_title('GW22', size=25);
  sns.heatmap(gw34_df, yticklabels = False, cbar = False, ax = axs[3], cmap = 'rainbow', vmin=0, vmax=1); axs[3].set_title('GW34', size=25);
  fig.colorbar(axs[0].collections[0], cax=axs[4]);
  plt.title(gene, fontsize=20);
  axs[0].tick_params(axis='both', which='major', labelsize=15)
  axs[1].tick_params(axis='both', which='major', labelsize=15)
  axs[2].tick_params(axis='both', which='major', labelsize=15)
  axs[3].tick_params(axis='both', which='major', labelsize=15)
  plt.tight_layout();
  plt.savefig(gene + '_10.png', dpi=500);
  plt.clf()

#genes = ['CBLN2', 'CPNE8', 'B3GNT2', 'SRM', 'STK32B', 'VSTM2L', 'PENK']  # subset used for Fig. e
genes = adata.var.index  # or run on the full panel

def main():
  # One worker per gene; consider capping the pool size (e.g., Pool(8)) on smaller machines
  with Pool(len(genes)) as pool:
    pool.map(make_heatmap, genes)

if __name__=="__main__":
    main()
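
The duplicate-column averaging step above is easy to verify in isolation; a toy sketch with made-up numbers:

Python:
import pandas as pd

df = pd.DataFrame([[1.0, 3.0, 5.0]], columns=['PFC', 'PFC', 'V1'])
# Average columns sharing a name (replacement for the deprecated groupby(axis=1))
print(df.T.groupby(level=0).mean().T)
#    PFC   V1
# 0  2.0  5.0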
[Output: per-gene layer-by-area heatmaps for GW15, GW20, GW22 and GW34]

Next, areal specialization of neuronal subtypes was analysed within each cortical layer. Although H2-level excitatory neuron (EN) types are distributed largely uniformly along the anterior-posterior (AP) axis, many H3-level EN subtypes (7 of 25 EN-ET subtypes and 15 of 24 EN-IT subtypes) show gradual enrichment toward anterior or posterior areas, forming continuous gradients (Fig. g).

Python:
import numpy as np
import scanpy as sc
import matplotlib.pyplot as plt
from multiprocessing import Pool
from matplotlib_scalebar.scalebar import ScaleBar
from itertools import repeat


adata = sc.read('merscope_integrated_855.h5ad')
adata.obs_names_make_unique()

def make_plot(sample, region, types, i):
    adata1 = adata[(adata.obs['sample']==sample) & (adata.obs.region==region)].copy()
    # Highlight the anterior/posterior subtype pair in red/blue; every other subtype is grey
    colors = ['red', 'blue']
    color_dict = dict(zip(types, colors))
    color_dict.update(dict(zip(np.setdiff1d(adata.obs.H3_annotation.unique(), types), ['grey']*len(np.setdiff1d(adata.obs.H3_annotation.unique(), types)))))
    plot = sc.pl.embedding(adata1, basis="spatial", color = 'H3_annotation', groups = types, show = False, s=2, palette = color_dict, alpha=1); plt.axis('off');
    plot.set_aspect('equal');
    # Rebuild the legend so its entries follow the order of the highlighted pair
    handles, labels = plt.gca().get_legend_handles_labels();
    order = [labels.index(i) for i in types];
    plt.legend([handles[idx] for idx in order],[labels[idx] for idx in order], loc = 'center', fontsize=2, ncol = 2, bbox_to_anchor=(1.0,1.0), markerscale=0.25);
    plot.get_figure().gca().set_title('');
    scalebar = ScaleBar(1, "um", fixed_value=500, location = 'lower right');
    plot.add_artist(scalebar);
    plt.tight_layout();
    plt.savefig(sample + '_' + region + '_' + str(i) + '.png', dpi=500); plt.clf()


samples = ['FB123-F1', 'FB123-F2', 'FB123-P1', 'FB123-O2']
types_all = [('EN-IT-L2/3-A2', 'EN-IT-L3-P'), ('EN-IT-L4-A', 'EN-IT-L4-late'), ('EN-IT-L4/5-1', 'EN-IT-L5/6-P'), ('EN-ET-L6-A', 'EN-ET-L6-P'), ('EN-ET-SP-A', 'EN-ET-SP-P1')]



def main():
  with Pool(5) as pool:
    for sample in samples:
      pool.starmap(make_plot, zip(repeat(sample.split('-')[0]), repeat(sample.split('-')[1]), types_all, range(5)))

if __name__=="__main__":
    main()
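
As with the earlier scripts, one panel can be drawn serially for debugging (assumes the definitions above are loaded):

Python:
# One section and one anterior/posterior subtype pair
make_plot('FB123', 'F1', ('EN-IT-L2/3-A2', 'EN-IT-L3-P'), 0)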
[Output: spatial maps highlighting anterior/posterior-enriched EN subtype pairs]

Further analysis of all significant DEGs (adjusted P < 0.05, log[fold change] > 0.5) identified four genes that are anteriorly enriched in all five neuron classes and another eight that are posteriorly enriched. These genes also show strong anterior or posterior enrichment in the overall EN population, suggesting that they can serve as marker genes of cortical areal identity (Fig. h).

R:
library(anndata)
library(stringr)
library(Matrix)
library(MASS)
library(reticulate)
library(Seurat)
library(dplyr)
library(splitstackshape)
library(ggplot2)
library(reshape2)
library(cowplot)
library(glue)
# library(Rfast)

# use_python("/usr/local/bin/python3.10")
align_legend <- function(p, hjust = 0.5) {
# extract legend
  g <- cowplot::plot_to_gtable(p)
  grobs <- g$grobs
  legend_index <- which(sapply(grobs, function(x) x$name) == "guide-box")
  legend <- grobs[[legend_index]]

# extract guides table
  guides_index <- which(sapply(legend$grobs, function(x) x$name) == "layout")

# there can be multiple guides within one legend box  
for (gi in guides_index) {
    guides <- legend$grobs[[gi]]
    
    # add extra column for spacing
    # guides$width[5] is the extra spacing from the end of the legend text
    # to the end of the legend title. If we instead distribute it by `hjust:(1-hjust)` on
    # both sides, we get an aligned legend
    spacing <- guides$width[5]
    guides <- gtable::gtable_add_cols(guides, hjust*spacing, 1)
    guides$widths[6] <- (1-hjust)*spacing
    title_index <- guides$layout$name == "title"
    guides$layout$l[title_index] <- 2
    
    # reconstruct guides and write back
    legend$grobs[[gi]] <- guides
  }

# reconstruct legend and write back
  g$grobs[[legend_index]] <- legend
# group_name <- group
# pdf(paste0(paste0(path, group_name, sep = "/"), ".pdf",sep = ""))
  g
}


bubble_draw <- function(count_select, zs_select) {
  count_select$layer <- factor(rownames(count_select), levels = rownames(count_select))
  zs_select$layer <- factor(rownames(zs_select), levels = rownames(count_select))

  nonzero_sub_melt <- melt(count_select, id = c("layer"))
  expr_sub_melt <- melt(zs_select, id = c("layer"))
  color_map <- expr_sub_melt$value
  mid <- mean(color_map)

  col_min <- 0
  col_max <- max(zs_select[,1:(ncol(zs_select)-1)]) + 0.005

  x = ggplot(nonzero_sub_melt, aes(x = layer, y = variable, size = value, color = color_map)) + 
    # geom_point(aes(size = value, fill = layer), alpha = 1, shape = 21) + 
    geom_point(aes(size = value)) + 
    scale_size_continuous(limits = c(0.000001, 100), range = c(1,10), breaks = c(20, 40, 60, 80)) +
    labs( x= "", y = "", size = "Percentage of expressed cells (%)", fill = "", color = "Relative expression")  +
    theme(legend.title.align = 0.5,
          legend.text.align = 0.5,
          legend.key=element_blank(),
          axis.text.x = element_text(colour = "black", size = 10, face = "bold", angle = 90), 
          axis.text.y = element_text(colour = "black", face = "bold", size = 10), 
          legend.text = element_text(size = 8, face ="bold", colour ="black"), 
          legend.title = element_text(size = 8, face = "bold"), 
          panel.background = element_blank(),  #legend.spacing.x = unit(0.5, 'cm'),
          legend.position = "right",
          legend.box.just = "top",
          legend.direction = "vertical",
          legend.box="vertical",
          legend.justification="center"
    ) +  
    # theme_minimal() +
    # theme(legend.title.align = 0)+
    
    # scale_fill_manual(values = color_map, guide = FALSE) + 
    scale_y_discrete(limits = rev(levels(nonzero_sub_melt$variable))) +
    scale_color_gradient(low="yellow", high="blue", space ="Lab", limits = c(col_min, col_max), position = "bottom") +
    # scale_fill_viridis_c(guide = FALSE, limits = c(-0.5,1.5)) + 
    guides(colour = guide_colourbar(direction = "vertical", barheight = unit(4, "cm"), title.vjust = 4))
return(x)
}




# count_tot <- read.csv("result/prop.csv", row.names = 1)
count_A <- read.csv("result/DEG/prop_A.csv", row.names = 1)
count_P <- read.csv("result/DEG/prop_P.csv", row.names = 1)
count_T <- read.csv("result/DEG/prop_T.csv", row.names = 1)
# gene_temp <- c("NR4A2", "FGFBP3", "MET", "NR2F1", "UNC5C")

# zs_select <- read.csv("result/expr.csv", row.names = 1)
zs_A <- read.csv("result/DEG/expr_A.csv", row.names = 1)
zs_P <- read.csv("result/DEG/expr_P.csv", row.names = 1)
zs_T <- read.csv("result/DEG/expr_T.csv", row.names = 1)
zs_select <- rbind(rbind(zs_A, zs_T), zs_P)

ap_num <- 10
t_num <- 5

if (ap_num == 10) {
  height <- 8.5
} else if (ap_num == 15 & t_num == 10) {
  height <- 14
} else if (ap_num == 20 & t_num == 10) {
  height <- 18
}

# Bubble-plot rows: the top ap_num genes from the A and P tables plus the top t_num from the T table
count_top <- count_A[1:ap_num, ]
count_top <- count_top[order(count_top[,1], decreasing = TRUE), ]
count_temp <- count_T[1:t_num, ]
count_temp <- count_temp[order(count_temp[,1], decreasing = TRUE), ]
count_bottom <- count_P[!rownames(count_P) %in% rownames(count_temp), ]
count_bottom <- count_bottom[1:ap_num, ]
count_bottom <- count_bottom[order(count_bottom[,ncol(count_bottom)], decreasing = FALSE), ]

count_select <- rbind(rbind(count_top, count_temp), count_bottom)
count_select <- count_select * 100

zs_select <- zs_select[rownames(count_select), ]

count_select <- t(count_select)
zs_select <- t(zs_select)
zs_select <- as.matrix(zs_select)
zs_min <- matrix(apply(zs_select, 2, min), nrow = nrow(zs_select), ncol = ncol(zs_select), byrow = TRUE)
zs_max <- matrix(apply(zs_select, 2, max), nrow = nrow(zs_select), ncol = ncol(zs_select), byrow = TRUE)
zs_select <- (zs_select - zs_min) / (zs_max - zs_min)
count_select <- data.frame(count_select)
zs_select <- data.frame(zs_select)

x <- bubble_draw(count_select, zs_select)
ggdraw(align_legend(x))
ggsave( glue("DEG_plot/bubble_{ap_num}_{t_num}.pdf"),
       width = 7, height = height, limitsize = TRUE)


gene_lst <- colnames(count_select)
class_lst <- c("EN-Mig", "RG", "IPC", "IN")
for (class in class_lst) {
  zs_tot <- read.csv(glue("result/expr_{class}.csv"), row.names = 1)
  count_tot <- read.csv(glue("result/prop_{class}.csv"), row.names = 1)
  count_tot <- count_tot * 100
  zs_select <- zs_tot[gene_lst, ]
  count_select <- count_tot[gene_lst, ]
  count_select <- t(count_select)
  zs_select <- t(zs_select)

  zs_select <- as.matrix(zs_select)
  zs_min <- matrix(apply(zs_select, 2, min), nrow = nrow(zs_select), ncol = ncol(zs_select), byrow = TRUE)
  zs_max <- matrix(apply(zs_select, 2, max), nrow = nrow(zs_select), ncol = ncol(zs_select), byrow = TRUE)
  zs_select <- (zs_select - zs_min) / (zs_max - zs_min)
  count_select <- data.frame(count_select)
  zs_select <- data.frame(zs_select)
  x <- bubble_draw(count_select, zs_select)
  ggdraw(align_legend(x))
  ggsave(glue("DEG_plot/bubble_{class}_{ap_num}_{t_num}.pdf"),
         width = 7, height = height, limitsize = TRUE)
}
[Output: DEG bubble plots]

Summary

By analysing laminar gene expression patterns in the human fetal cortex, the authors identified layer-dependent marker genes that show strong areal specialization and temporal dynamics, particularly during GW22–GW34. They assembled context-aware marker gene sets and demonstrated how these genes differ in expression across cortical areas and neuronal subtypes. The results indicate that such genes can serve as markers of cortical areal identity and help explain how cortical layers and areas become specialized.

Originally published 2025-06-02; shared from the BioOmics WeChat public account.
