I would like to download the SQL code of every report in my MicroStrategy environment to my personal folder on the network or to a folder on the C: drive. Is that possible? Or is there a table accessible within MicroStrategy that stores all of the SQL code?
Posted on 2021-03-15 06:49:02
The Python package "mstrio-py" makes it easier to get this kind of information from the API. A guide on installing it, along with usage examples, can be found there.
As far as I know, the API does not expose report SQL views. If you need those as well, this KB article may help.
Below is a quick example I used recently that mixes a few of these pieces together.
Note: what you are interested in may differ; the SQL of a fresh run is different from the SQL of a run (cube) cache instance.
from json.decoder import JSONDecodeError
from mstrio.connection import Connection
from mstrio.admin.application import Environment
from mstrio.admin import dataset

# connection info
base_url = "https://HOST:PORT/MicroStrategyLibrary/api"
username = "USERNAME"
password = "PASSWORD"
project = "PROJECTNAME"
conn = Connection(base_url, username, password, project_name=project, login_mode=1)  # 1 == standard login
env = Environment(connection=conn)

# do not filter subtype if you want to include user-uploaded cubes
datasets = dataset.list_datasets(connection=conn, subtype=[776])

# ROUTE 1
# this only shows the SQL that would be used if you ran the cube right now;
# to check what has actually been used, see ROUTE 2
# an extra call is needed for every sqlView
z = []
for cube in datasets:
    with conn.session.get(url=conn.base_url + '/api/v2/cubes/' + cube.id + '/sqlView') as res:
        try:
            z.append((cube, res.json()))
        except JSONDecodeError:
            print("Error decoding sqlView response for cube " + cube.id)

# ROUTE 2
# alternatively grab every cached cube from the monitor to get the cache ids;
# this shows the SQL of the data you have already retrieved
response = conn.session.get(url=f'{conn.base_url}/api/monitors/caches/cubes',
                            params={'clusterNode': IS_NODE_NAME,  # placeholder: your iServer node name
                                    'projectIds': conn.project_id})
monitor_caches = response.json()

# grab info for every cube cache found; there is a lot of additional info available
for cache in monitor_caches['cubeCaches']:
    response = conn.session.get(f'{conn.base_url}/api/monitors/caches/cubes/{cache["id"]}',
                                headers={'X-MSTR-ProjectID': conn.project_id})
    monitor_cube_instance = response.json()
    # there may be multiple passes, so simply joining them might not be a good strategy
    cube_cache_sql = ', '.join(x['sqlStatement'] for x in
                               monitor_cube_instance['jobExecutionStatistics']['queryPasseInfos'])
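Since the question asks for the statements to end up in a local folder, here is a minimal sketch of the last step: dumping the collected SQL strings to one `.sql` file per object. The function name `save_sql_statements` and the folder name `sql_export` are my own; the sketch assumes you have already reduced the results above to plain `(name, sql)` pairs.

```python
import re
from pathlib import Path

def save_sql_statements(results, out_dir):
    """Write each (object_name, sql_text) pair to its own .sql file.

    `results` is assumed to be plain (name, sql) string pairs, e.g.
    distilled from the (cube, sqlView-json) tuples collected above.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    for name, sql in results:
        # strip characters that are not legal in Windows file names
        safe = re.sub(r'[\\/:*?"<>|]+', '_', name).strip() or 'unnamed'
        path = out / f'{safe}.sql'
        path.write_text(sql, encoding='utf-8')
        written.append(path)
    return written

# example: dump two collected statements into a local folder
files = save_sql_statements(
    [('Sales Cube', 'select 1'), ('Ops/Report', 'select 2')],
    'sql_export')
```

For a network share, `out_dir` can simply be a UNC path (e.g. `r'\\SERVER\share\sql'`), since `pathlib` handles those like any other path on Windows.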
https://stackoverflow.com/questions/66571598