I am using pvlib to model a PV array, and sometimes when I try to access weather forecast data I get the following error:
ValueError: Big-endian buffer not supported on little-endian compiler
I am not sure why it only happens occasionally rather than every time I run the code. Below is the code I am running; the last line is the one that raises the error. Any help would be appreciated, thanks!
# built-in python modules
import datetime
import inspect
import os
import pytz
# scientific python add-ons
import numpy as np
import pandas as pd
# plotting
# first line makes the plots appear in the notebook
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib as mpl
#import the pvlib library
from pvlib import solarposition, irradiance, atmosphere, pvsystem
from pvlib.forecast import GFS
from pvlib.modelchain import ModelChain
pd.set_option('display.max_rows', 500)
latitude, longitude, tz = 21.300268, -157.80723, 'Pacific/Honolulu'
# specify time range.
# start = pd.Timestamp(datetime.date.today(), tz=tz)
pacific = pytz.timezone('Etc/GMT+10')
# print(pacific)
# datetime.datetime(year, month, day, hour, minute, second, microsecond, tzinfo)
start2 = pd.Timestamp(datetime.datetime(2020, 2, 10, 13, 0, 0, 0, pacific))
# print(start)
# print(start2)
# print(datetime.date.today())
end = start2 + pd.Timedelta(days=1.5)
# Define forecast model
fm = GFS()
# get data from location specified above
forecast_data = fm.get_processed_data(latitude, longitude, start2, end)
# print(forecast_data)
Posted on 2020-02-29 15:54:19
I think I have a solution now. For some reason, the data coming back from these UNIDATA DCSS queries is occasionally big-endian, which is incompatible with pandas DataFrame and Series objects, as discussed here. I found the function in pvlib that takes the NetCDF4 data and builds the pandas DataFrame: looking inside pvlib, in forecast.py, the function is called _netcdf2pandas. I will copy the source below:
data_dict = {}
for key, data in netcdf_data.variables.items():
    # if accounts for possibility of extra variable returned
    if key not in query_variables:
        continue
    squeezed = data[:].squeeze()
    if squeezed.ndim == 1:
        data_dict[key] = squeezed
    elif squeezed.ndim == 2:
        for num, data_level in enumerate(squeezed.T):
            data_dict[key + '_' + str(num)] = data_level
    else:
        raise ValueError('cannot parse ndim > 2')
data = pd.DataFrame(data_dict, index=self.time)
The goal is to squeeze the NetCDF4 data down to individual pandas Series, save each series in a dictionary, and then merge them all into a DataFrame and return it. All I did was add a check that determines whether a squeezed series is big-endian and, if so, converts it to little-endian. My modified code is below:
data_dict = {}
for key, data in netcdf_data.variables.items():
    # if accounts for possibility of extra variable returned
    if key not in query_variables:
        continue
    squeezed = data[:].squeeze()
    # If the data is big endian, swap the byte order to make it little endian
    if squeezed.dtype.byteorder == '>':
        squeezed = squeezed.byteswap().newbyteorder()
    if squeezed.ndim == 1:
        data_dict[key] = squeezed
    elif squeezed.ndim == 2:
        for num, data_level in enumerate(squeezed.T):
            data_dict[key + '_' + str(num)] = data_level
    else:
        raise ValueError('cannot parse ndim > 2')
data = pd.DataFrame(data_dict, index=self.time)
I used this Stack Overflow answer to determine the byte order of each series, and the SciPy documentation gave me some clues about what the possible byte orders are.
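For reference, the byte-order check and swap can be exercised on their own with plain NumPy. This is a minimal sketch using a synthetic array, not actual GFS data:

```python
import numpy as np

# Synthetic stand-in for a NetCDF variable: '>f8' forces big-endian float64,
# the situation that trips up pandas on a little-endian machine.
big = np.array([1.0, 2.0, 3.0], dtype='>f8')
assert big.dtype.byteorder == '>'  # '>' marks big-endian

# Swap the raw bytes, then reinterpret them with the flipped dtype so the
# numeric values are preserved. NumPy 2.0 removed ndarray.newbyteorder(),
# so the dtype-level method combined with a view is the portable spelling.
little = big.byteswap().view(big.dtype.newbyteorder())
assert little.dtype.byteorder in ('<', '=')  # little-endian / native order
assert little.tolist() == [1.0, 2.0, 3.0]    # same values, new byte order
```

On NumPy versions before 2.0, `big.byteswap().newbyteorder()`, as used in the patched pvlib code above, does the same thing.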
Here is my pull request to pv-lib that fixes the problem for me. I hope this helps. I still have no idea why the problem was intermittent: about 95% of the time my get_processed_data attempts would fail, and whenever one did work I would think I had found a fix, only for Pandas to throw the endian error again. Since patching pv-lib, I no longer get any big/little-endian errors from Pandas.
https://stackoverflow.com/questions/60161759