I am running a very simple Spark (2.4.0 on Databricks) ML script:

    from pyspark.ml.clustering import LDA
    lda = LDA(k=10, …)

Fitting the model fails with:

    requirement failed: Column features must be of type equal to one of the following types: [struct<type:tinyint,size:int,indices:array<int>,values:array<double>>, array<double>, array<float>] but was actually of type array<double>
I am trying to create a schema to validate the file being loaded:

    StructField("type", StringType()),
    StructField("coordinates", ArrayType(DoubleType())),                        # POINT
    StructField("coordinates", ArrayType(ArrayType(ArrayType(DoubleType())))),  # POLYGON
    extension Array where Element : Double { return self.map …

However, `$0` is a `Double`, and there is a `Float.init(other: Double)` initializer. What is the problem? Edit: `Float.init(other: Double)` was originally suggested by the compiler. Screenshot:
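The fragment above appears to be mapping `[Double]` to `[Float]`. One likely issue (an assumption, since the snippet is truncated): `Double` is a concrete struct, not a protocol or class, so `Element : Double` is not a valid conformance constraint; a same-type constraint `Element == Double` is needed, and the plain `Float(_:)` initializer converts each element. A minimal sketch, with the property name `floats` as a hypothetical choice:

```swift
// Constrain by same-type (==), not conformance (:), since Double is concrete.
extension Array where Element == Double {
    var floats: [Float] {
        // Float's unlabeled init(_: Double) narrows each element.
        return map { Float($0) }
    }
}

let doubles = [1.5, 2.5, 3.5]
let floats = doubles.floats
```

The values 1.5, 2.5, and 3.5 are exactly representable in `Float`, so this particular conversion is lossless; in general the narrowing rounds to the nearest representable `Float`.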