I am trying to convert the Pandas DataFrames sitting on each worker node (an RDD whose elements are Pandas DataFrames) into a single Spark DataFrame spread across all the worker nodes. The data is a pandas DataFrame because I am using some datetime indexing which isn't available in Spark. Once the pandas processing is finished, how can I convert it into a Spark DataFrame? I tried doing `rdd = rdd.map(spark…` but got stuck.
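The direction I am considering is flattening each per-worker pandas DataFrame into plain rows on the workers and then building one Spark DataFrame from the resulting RDD. Below is a minimal sketch of that idea, assuming a `SparkSession` named `spark` and an RDD of pandas DataFrames called `pdf_rdd` (both names are placeholders, not from my actual code):

```python
import pandas as pd
from pyspark.sql import SparkSession, Row

spark = SparkSession.builder.appName("pandas-to-spark").getOrCreate()

# pdf_rdd: an RDD where each element is a pandas DataFrame produced on a worker.
# Here it is built with parallelize() just so the sketch is self-contained.
pdf_rdd = spark.sparkContext.parallelize([
    pd.DataFrame({"ts": pd.date_range("2020-01-01", periods=3, freq="D"),
                  "value": [1, 2, 3]}),
    pd.DataFrame({"ts": pd.date_range("2020-02-01", periods=2, freq="D"),
                  "value": [4, 5]}),
])

def to_rows(pdf):
    # Flatten one pandas DataFrame into Row objects on the worker.
    # reset_index() would keep a datetime index as an ordinary column if needed.
    for rec in pdf.reset_index(drop=True).to_dict(orient="records"):
        yield Row(**rec)

# Flatten every per-worker DataFrame, then let the driver build one
# Spark DataFrame that spans all partitions.
row_rdd = pdf_rdd.flatMap(to_rows)
sdf = spark.createDataFrame(row_rdd)
sdf.show()
```

Is something like this the right approach, or is there a more idiomatic way to go from an RDD of pandas DataFrames to a single Spark DataFrame?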