
Is there a way to download all of the SQL code created in MicroStrategy into a document outside of MicroStrategy?

Stack Overflow user
Asked on 2021-03-11 03:29:41
1 answer · 87 views · 0 following · 0 votes

I'd like to be able to download all of the SQL code for every report that exists in my MicroStrategy environment into a personal folder on the network or a folder on my C: drive. Is that possible? Or is there a table accessible within MicroStrategy where all of the SQL code is stored?


1 answer

Stack Overflow user

Posted on 2021-03-15 06:49:02

The Python package "mstrio-py" makes it easier to get at this kind of information through the API. Guides on how to install it, along with some usage examples, can be found there as well.
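For reference, a minimal setup fragment: the package is published on PyPI under the name given above, so installation is a standard pip command.

```shell
pip install mstrio-py
```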

As far as I know, the API does not expose report SQL views. But if you need those as well, this KB article may help.

Below is a quick example I put together recently that mixes a few pieces of information.

Note: what you're interested in may differ: "freshly run" SQL is not the same as the SQL of a run (cube) cache instance.

Code language: python
from mstrio.connection import Connection
from mstrio.admin.application import Environment, Application, compare_application_settings
from mstrio.cube import Cube
from mstrio.report import Report
from mstrio.admin import dataset
from typing import Union, List
from json.decoder import JSONDecodeError

#connection info
base_url = "https://HOST:PORT/MicroStrategyLibrary/api"
username = "USERNAME"
password = "PASSWORD"
project = "PROJECTNAME"
conn = Connection(base_url, username, password, project_name=project, login_mode=1) #1==standard login
env = Environment(connection=conn)

#do not filter subtype if you want to include user-uploaded cubes
datasets = dataset.list_datasets(connection=conn, subtype=[776])

# ROUTE1
# this only shows the SQL that would be used if you ran the cube right now. if you want to check what has been used use ROUTE2
# extra call for every sqlView is needed
z = []  # collected (dataset, sqlView-json) pairs
for i, cube in enumerate(datasets):
    with conn.session.get(url=conn.base_url + '/api/v2/cubes/' + cube.id + '/sqlView') as res:
        try:
            z.append((datasets[i], res.json()))
        except JSONDecodeError:
            print("Error")

# ROUTE2
# alternatively grab every cached cube from monitor to get cache-ids. this is a bit different, this will show you the SQL of the data you already have retrieved
node_name = "NODE_NAME"  # placeholder: name of the cluster node to query
response = conn.session.get(url=f'{conn.base_url}/api/monitors/caches/cubes',
                            params={'clusterNode': node_name,
                                    'projectIds': conn.project_id})
monitor_caches = response.json()

# grab info for every cube cache found, there is a lot of additional info available
for cache in monitor_caches['cubeCaches']:
    response = conn.session.get(f'{conn.base_url}/api/monitors/caches/cubes/{cache["id"]}',
                                headers={'X-MSTR-ProjectID': conn.project_id})
    monitor_cube_instance = response.json()
    #there may be multiple passes, simply appending might not be a good strategy
    cube_cache_sql = ', '.join([x['sqlStatement'] for x in monitor_cube_instance['jobExecutionStatistics']['queryPasseInfos']])
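To get the statements out of MicroStrategy and onto disk, as the question asks, the collected SQL can then be written to a local folder. A minimal sketch, assuming you have gathered (name, sql) pairs such as the list built in ROUTE1; `dump_sql` and the output folder name are hypothetical, not part of mstrio-py:

```python
from pathlib import Path
import re

def dump_sql(statements, out_dir="cube_sql"):
    """Write each (name, sql) pair to <out_dir>/<filesystem-safe name>.sql."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    for name, sql in statements:
        safe = re.sub(r'[^\w.-]+', '_', name)  # replace unsafe filename chars
        path = out / f"{safe}.sql"
        path.write_text(sql, encoding="utf-8")
        written.append(path)
    return written

# demo with placeholder data; in practice pass the pairs collected above
files = dump_sql([("Daily Sales", "SELECT 1"), ("Inventory/EU", "SELECT 2")])
print([p.name for p in files])  # -> ['Daily_Sales.sql', 'Inventory_EU.sql']
```

One file per cube keeps the result easy to diff and version; adjust the naming if your environment allows duplicate dataset names.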
Original page content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/66571598