SRE Interview Questions and Answers Simulation - Monitoring and Logging

Original article by 行者深蓝 · Published 2024-09-08 02:45:33

Monitoring

1. Metrics, Events/Logs, Tracing, and Profiling

  • Metrics: Numerical time-series measurements of system state and performance, typically used for real-time monitoring.
  • Events/Logs: Discrete records of what happened in the system, used for auditing and tracking down issues.
  • Tracing: Tracks the flow path of requests to help analyze performance bottlenecks.
  • Profiling: Analyzes program performance to identify bottlenecks and optimization points.

2. Metrics

  • Q: What are Metrics? A: Metrics are time-series data representing numerical values of system states and performance. They are regularly collected and recorded, such as CPU usage, memory consumption, and request response times.
  • Q: What are common monitoring metrics? A: These include resource usage (e.g., CPU, memory), application performance (e.g., request response time, error rates), and system health (e.g., Pod status).
  • Q: How to optimize the scraping frequency and storage strategy of Metrics? A: Optimize performance and storage by adjusting scraping frequency, using efficient storage and compression techniques, and setting a reasonable retention strategy.
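
As a rough sketch of that last point (the job name, target address, and retention values below are hypothetical), scrape frequency is set per job in prometheus.yml, while local retention is bounded with server flags:

```shell
# Sketch: per-job scrape frequency (hypothetical target address).
cat > prometheus.yml <<'EOF'
global:
  scrape_interval: 30s          # default scrape frequency for all jobs
scrape_configs:
  - job_name: node
    scrape_interval: 15s        # scrape this job more often than the default
    static_configs:
      - targets: ['192.0.2.10:9100']
EOF

# Bound local TSDB growth with retention flags when starting Prometheus.
prometheus \
  --config.file=prometheus.yml \
  --storage.tsdb.retention.time=15d \
  --storage.tsdb.retention.size=50GB
```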

3. Logs

  • Q: What are Logs? A: Logs record detailed system events and states, including application logs and system logs, to help analyze and troubleshoot issues.
  • Q: What are common types of logs? A: Application logs (logs from running applications), system logs (e.g., syslog), and Kubernetes container logs.
  • Q: How to manage and analyze large volumes of logs? A: Use centralized log management tools (e.g., ELK, Loki), apply log filtering, indexing, and persistence, and integrate log analysis with Metrics.

4. Events

  • Q: What are Events? A: Events record important state changes or behaviors in the system, such as the creation of Pods or the restart of containers in Kubernetes.
  • Q: How to effectively manage and analyze events? A: Use event-driven monitoring, trigger alerts or automated actions based on events, and optimize the collection and processing of event streams.

5. Tracing

  • Q: What is Tracing? A: Tracing records the path of requests across distributed systems, helping to understand service call chains and locate performance bottlenecks.
  • Q: What are common distributed tracing tools? A: Jaeger, Zipkin, OpenTelemetry.
  • Q: How to optimize the sampling rate for distributed tracing? A: Set a reasonable sampling rate to balance the precision of tracing data and the performance overhead on the system.

6. Profiling

  • Q: What is Profiling? A: Profiling records performance data of an application, such as CPU usage and memory allocation, helping to identify performance bottlenecks.
  • Q: What are common profiling tools? A: Go pprof, JVM Profiling, BPF/BCC.

7. APM (Application Performance Monitoring)

  • Q: What is APM? A: APM monitors application performance, including response times, throughput, and the performance of dependent services.
  • Q: What is the main purpose of APM? A: It helps identify performance bottlenecks, slow queries, memory leaks, and optimize application performance.

8. eBPF (Extended Berkeley Packet Filter)

  • Q: What is eBPF? A: eBPF is a kernel mechanism for efficiently capturing and analyzing system-level events, such as network traffic and system calls.
  • Q: How does eBPF differ from traditional monitoring? A: eBPF captures data directly at the kernel level, avoiding the performance overhead of user-space monitoring tools.

9. Agent

  • Q: What is a monitoring Agent? A: An Agent is a component that resides in the system to collect and send data, monitoring Metrics, Logs, and Traces.
  • Q: What are common Agent tools? A: Prometheus's Node Exporter, Fluentd, Telegraf, Datadog Agent.

10. OpenTelemetry

  • Q: What is OpenTelemetry? A: OpenTelemetry is an open-source framework that standardizes the collection of Metrics, Logs, and Traces, supporting cross-platform, multi-language observability.
  • Q: How does OpenTelemetry differ from traditional monitoring tools? A: OpenTelemetry provides standardized interfaces, supporting data collection and processing across multiple platforms and languages.

11. Prometheus Workflow

  • Workflow:
    • Data Scraping: Prometheus regularly pulls metrics data from configured endpoints.
    • Storage: Data is stored in a local time-series database.
    • Querying: Users query data through PromQL.
    • Alerting: Alerts are triggered based on configured alert rules.
    • Notification: Alerts are sent to notification systems.
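
To make the workflow above concrete, here is a minimal, hypothetical prometheus.yml wiring together scraping, rule evaluation, and Alertmanager notification (job names, file names, and addresses are placeholders):

```shell
cat > prometheus.yml <<'EOF'
global:
  scrape_interval: 15s            # data scraping: how often /metrics is pulled
  evaluation_interval: 15s        # alerting: how often rules are evaluated
scrape_configs:
  - job_name: demo-app
    static_configs:
      - targets: ['192.0.2.10:8080']
rule_files:
  - alert-rules.yml               # recording and alerting rules (PromQL)
alerting:
  alertmanagers:                  # notification: firing alerts go to Alertmanager
    - static_configs:
        - targets: ['alertmanager:9093']
EOF

# Querying: ad-hoc PromQL against the HTTP API (data is stored in the local TSDB).
curl -sG 'http://localhost:9090/api/v1/query' --data-urlencode 'query=up'
```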

12. Metric Types

  • Counter: A monotonically increasing counter, usually used to record the number of events (e.g., total HTTP requests).
  • Gauge: A value that can increase or decrease, representing a state (e.g., CPU usage).
  • Histogram: Records observations in configurable buckets to capture a distribution; quantiles can be derived server-side with histogram_quantile() (e.g., API response time).
  • Summary: Also captures a distribution, but quantiles are pre-computed on the client side (e.g., request latency percentiles).
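
For reference, the four types look roughly like this in the text exposition format scraped from a /metrics endpoint; the metric names and values below are illustrative samples, not taken from any specific system in this article:

```shell
# Peek at an exporter's exposition output (Node Exporter assumed on :9100).
curl -s http://localhost:9100/metrics | head -n 20

# Counter   : node_cpu_seconds_total{cpu="0",mode="idle"} 123456.78
# Gauge     : node_memory_MemAvailable_bytes 8.123456e+09
# Histogram : http_request_duration_seconds_bucket{le="0.5"} 24054
#             http_request_duration_seconds_sum 53423.2
#             http_request_duration_seconds_count 144320
# Summary   : go_gc_duration_seconds{quantile="0.99"} 0.00042
```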

13. Prometheus Service Discovery

  • Kubernetes: Automatically discovers Pods and Services.
  • Consul: Uses Consul's service registration and discovery mechanisms.
  • Zookeeper: Registers and discovers services through Zookeeper.
  • DNS: Uses DNS SRV records for service discovery.
  • File-based: Service discovery via static configuration files.
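
A hypothetical sketch of two of these mechanisms (Kubernetes Pod discovery plus file-based discovery); the annotation convention, paths, and job names are placeholders to be merged into an existing prometheus.yml:

```shell
cat <<'EOF' >> prometheus.yml   # fragment for illustration; merge into existing scrape_configs
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod                      # discover every Pod via the Kubernetes API
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"                  # keep only Pods annotated prometheus.io/scrape=true
  - job_name: file-discovered
    file_sd_configs:
      - files: ['targets/*.json']      # static targets maintained outside Prometheus
        refresh_interval: 1m
EOF
```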

14. Common Prometheus Functions

  • rate(): per-second average rate of increase of a counter over a time range.
  • sum(), avg(), max(), min(): aggregations across series, often combined with by/without.
  • increase(): total increase of a counter over a time range.
  • histogram_quantile(): estimates quantiles from histogram bucket series.
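
A few illustrative uses of these functions against the Prometheus HTTP query API (http_requests_total and http_request_duration_seconds_bucket are assumed example metrics):

```shell
# Per-second request rate over 5 minutes, summed per job: sum() + rate().
curl -sG http://localhost:9090/api/v1/query \
  --data-urlencode 'query=sum(rate(http_requests_total[5m])) by (job)'

# Total growth of a counter over the last hour: increase().
curl -sG http://localhost:9090/api/v1/query \
  --data-urlencode 'query=increase(http_requests_total[1h])'

# 95th-percentile latency estimated from histogram buckets: histogram_quantile().
curl -sG http://localhost:9090/api/v1/query \
  --data-urlencode 'query=histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket[5m])) by (le))'
```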

15. Thanos Architecture

  • Thanos is an extension of Prometheus providing long-term storage, global querying, and high availability. Main components include:
    • Thanos Sidecar: Deployed with Prometheus, uploads data to object storage.
    • Thanos Store: Reads data from object storage and supports queries.
    • Thanos Query: A unified query interface aggregating data from multiple Prometheus instances.
    • Thanos Compactor: Compresses stored data.
    • Thanos Ruler: Executes Prometheus rules and stores results in object storage.

16. Thanos vs. VictoriaMetrics

  • Thanos: Mainly extends Prometheus, providing long-term storage and global querying.
  • VictoriaMetrics: A high-performance time-series database compatible with Prometheus data format, offering efficient storage and querying.

17. Difference between Thanos Sidecar and Receive

  • Thanos Sidecar: Deployed alongside each Prometheus instance, uploads data to object storage and supports global querying.
  • Thanos Receive: Handles data reception from multiple Prometheus instances, enabling a highly available write path and data aggregation.

18. Thanos Rule Component vs. Prometheus

  • Thanos Rule: Executes Prometheus rules and stores results in object storage, supporting cross-cluster rule processing.
  • Prometheus: Has a built-in rule engine, with rules limited to the local Prometheus instance.

19. Prometheus Alerts

  • From Trigger to Notification Delay: Determined by the scrape interval, rule evaluation interval, the alert rule's for duration, and Alertmanager grouping/notification delays (e.g., group_wait).
  • Alert Suppression: Configurable rules to reduce duplicate alerts.
  • High Availability Alert Architecture: Use multiple Prometheus instances and Alertmanager for high availability.
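
To illustrate where the trigger-to-notification delay accumulates, here is a hypothetical alerting rule (names and thresholds are examples); the scrape interval, evaluation interval, the rule's for duration, and Alertmanager's group_wait all add up:

```shell
cat > alert-rules.yml <<'EOF'
groups:
  - name: availability
    rules:
      - alert: InstanceDown
        expr: up == 0
        for: 2m                        # condition must hold for 2m before firing
        labels:
          severity: critical
        annotations:
          summary: "{{ $labels.instance }} has been down for 2 minutes"
EOF

# In alertmanager.yml, group_wait/group_interval/repeat_interval control notification
# batching, and inhibit_rules suppress duplicate or lower-severity alerts.
```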

20. Pod Metrics

  • WSS (Working Set Size): Memory the container is actively using; in Kubernetes (container_memory_working_set_bytes) it is roughly total usage minus inactive file cache and is what the kubelet considers for eviction.
  • RSS (Resident Set Size): Physical memory actually resident for the process (anonymous and mapped pages), excluding most page cache.

21. Monitoring Optimization

  • Golden Signals: latency, traffic (throughput), errors, and saturation.
  • Optimizing Prometheus Performance: Shard or federate Prometheus instances, pre-compute heavy queries with recording rules, and tune scrape/evaluation intervals.

22. Automated Responses and Data Persistence

  • Automated Alert Response: Integrate automation tools (e.g., Ansible) or use Alertmanager’s webhook functionality.
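
A minimal, hypothetical alertmanager.yml that forwards alerts to a webhook where an automation handler (for example one backed by Ansible) can react:

```shell
cat > alertmanager.yml <<'EOF'
route:
  receiver: auto-remediation
  group_wait: 30s
receivers:
  - name: auto-remediation
    webhook_configs:
      - url: 'http://remediation-svc.internal:8080/hooks/alert'   # hypothetical endpoint
        send_resolved: true            # also notify when the alert clears
EOF
```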

23. Data Compression and Persistence

Prometheus stores samples in its local TSDB, which writes data into immutable blocks and compresses them with Gorilla-style (delta-of-delta / XOR) encoding to reduce storage space; Thanos builds on this by shipping blocks to object storage for long-term persistence.

24. kubectl top vs. Linux free Command Inconsistencies

kubectl top reports per-container (cgroup) usage collected via metrics-server/cAdvisor, typically working-set memory, whereas free reports node-wide memory including buffers and page cache; container runtime overhead and cache accounting therefore make the two numbers differ.

25. Exporter and Troubleshooting

  • Common Exporters: Node Exporter, Blackbox Exporter, Redis Exporter, etc., used to expose different system metrics.
  • Troubleshooting: Check Prometheus logs, configuration files, target states, and ensure the exporter is functioning properly.

26. Target Down Troubleshooting

Check target network connectivity, Prometheus scraping configuration, and exporter status for issues when a target is down.
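
A few typical commands for this kind of check (addresses, job names, and namespaces are placeholders):

```shell
# 1. Is the exporter reachable and serving metrics at all?
curl -sv http://192.0.2.10:9100/metrics | head

# 2. What does Prometheus itself report about the target?
curl -s http://localhost:9090/api/v1/targets \
  | jq '.data.activeTargets[] | {job: .labels.job, health, lastError}'

# 3. In Kubernetes, is the exporter actually running and backing the scrape target?
kubectl get endpoints node-exporter -n monitoring
kubectl logs -n monitoring ds/node-exporter --tail=50
```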

27. Prometheus Pull Model vs. Zabbix Push Model

  • Prometheus Pull Model: Prometheus periodically pulls metrics from targets, which pairs well with service discovery in dynamic environments (short-lived batch jobs are typically handled via the Pushgateway).
  • Zabbix Push Model: Agents actively push data to the Zabbix server (active checks), which suits static environments and hosts the server cannot reach directly.

28. Prometheus Operator

  • Adding Targets and Alert Rules: Targets and alert rules can be configured through the Custom Resource Definitions (CRDs) of the Prometheus Operator.
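
A sketch of both CRDs (namespaces, labels, and the alert expression are hypothetical): a ServiceMonitor adds scrape targets, and a PrometheusRule adds alerting rules:

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: demo-app
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: demo-app                    # Services with this label become scrape targets
  endpoints:
    - port: metrics
      interval: 30s
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: demo-app-rules
  namespace: monitoring
spec:
  groups:
    - name: demo-app
      rules:
        - alert: DemoAppHighErrorRate
          expr: sum(rate(http_requests_total{app="demo-app",code=~"5.."}[5m])) > 1
          for: 5m
EOF
```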

29. Exporter Outside the Kubernetes Cluster

  • Monitoring: In Prometheus configuration, add relevant jobs and targets to collect metrics from outside the Kubernetes cluster.
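
For example, a VM running Node Exporter outside the cluster can be scraped with a plain static job (the address and labels are placeholders); with the Prometheus Operator, such a snippet usually goes into an additionalScrapeConfigs secret:

```shell
cat <<'EOF' >> prometheus.yml   # fragment for illustration; merge into existing scrape_configs
scrape_configs:
  - job_name: external-node
    static_configs:
      - targets: ['203.0.113.20:9100']   # host outside the Kubernetes cluster
        labels:
          env: legacy-dc
EOF
```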

30. APM and eBPF Agent

  • APM (Application Performance Monitoring): Monitors application performance and provides in-depth application-level metrics.
  • eBPF (Extended Berkeley Packet Filter): Used for high-performance kernel-level monitoring, providing fine-grained system data.

31. OpenTelemetry

  • OpenTelemetry: An open standard that provides a unified way to collect, process, and export metrics, logs, and traces data.

32. Building an Observability Platform

  • Q: How to build a comprehensive observability platform? A: By integrating metrics, logs, tracing, and profiles, design a unified monitoring platform that supports multi-data source integration, automated alerting, and high availability.
  • Q: How to ensure high availability for the observability platform? A: Achieve high availability by ensuring redundancy of platform components, load balancing, and designing effective data storage and query optimization strategies.

ELK

Elasticsearch (ES) and related technologies involve deep discussions on indexing principles, storage mechanisms, performance optimization, and architecture design. Below are brief answers to each topic:

1. ES Indexing Principle

  • Elasticsearch writes documents to one or more shards, each of which is a Lucene index. New documents go into an in-memory indexing buffer and are appended to an on-disk transaction log (translog); a refresh makes buffered documents searchable as Lucene segments, and a flush commits segments to disk and truncates the translog.

2. ES Storage Principle

  • Elasticsearch uses the Lucene library to store data. Data is partitioned into shards, each having its own inverted index, storage files, and transaction log. Data is stored in the form of JSON documents.

3. Full-text Search in ES

  • Queries are parsed and transformed into Lucene queries. ES looks up matching documents in the inverted index, calculates relevance scores, and returns the matching results.
  • ES Write Performance Optimization: Use bulk operations, adjust index refresh frequency, optimize the number and size of shards, configure appropriate memory and filesystem settings, and tune merge policies.
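
Two of those levers, hedged as illustrations (the index name app-logs and the documents are made up): bulk indexing instead of per-document requests, and a relaxed refresh interval:

```shell
# Index documents in batches via the _bulk API (newline-delimited JSON).
curl -s -X POST 'http://localhost:9200/_bulk' \
  -H 'Content-Type: application/x-ndjson' \
  --data-binary $'{"index":{"_index":"app-logs"}}\n{"ts":"2024-09-08T02:45:33Z","level":"error","msg":"timeout"}\n{"index":{"_index":"app-logs"}}\n{"ts":"2024-09-08T02:45:34Z","level":"info","msg":"retry ok"}\n'

# Trade search freshness for write throughput by refreshing less often.
curl -s -X PUT 'http://localhost:9200/app-logs/_settings' \
  -H 'Content-Type: application/json' \
  -d '{"index": {"refresh_interval": "30s"}}'
```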

4. ES Query Performance Optimization

  • Optimize index mappings, fine-tune query syntax, use caches (e.g., query cache), configure the appropriate number of shards and replicas, and monitor and adjust JVM memory settings.

5. Troubleshooting High JVM Usage in ES

  • Monitor JVM garbage collection (GC) logs, analyze heap memory usage, check for thread and lock contention, and optimize ES configuration by adjusting heap size and garbage collectors.

6. ES Fleet Server Architecture

  • Fleet: The Elastic Stack's centralized management layer for Elastic Agents, surfaced as a Kibana app; Fleet Server is the component that agents enroll with and check in to for policy distribution and status reporting.

7. Comparison of ClickHouse, Loki, and ES

  • ClickHouse: Best suited for high-performance, real-time analytics, especially for large-scale data aggregation queries.
  • Loki: Focuses on log data collection and storage, optimized for large-scale log data handling.
  • ES: Provides robust full-text search and flexible querying, ideal for scenarios requiring powerful search and analysis capabilities.

8. ES Full GC Troubleshooting

  • Check JVM GC logs, analyze the cause of Full GC, adjust heap size and garbage collector settings, and optimize ES indexing and query configurations.

9. Difference Between Young GC and Old GC in ES

  • Young GC: Focuses on collecting garbage in the young generation, occurring frequently for newly created objects.
  • Old GC: Collects garbage in the old generation, occurring less frequently but taking longer, dealing with long-lived objects.

10. Purpose of ES Versioning

  • The version field resolves concurrent update issues, ensuring that document updates do not overwrite other client updates.

11. ES Aggregation Types

  • Bucket Aggregation: Groups documents into buckets, e.g., by date, category.
  • Metric Aggregation: Performs calculations on numeric data, e.g., sum, average.
  • Pipeline Aggregation: Performs further calculations on aggregation results, such as moving averages.

12. How Filebeat Ensures Continuous Log Shipping

  • Filebeat records the read offset of each harvested file in its registry and ships events with at-least-once delivery (retrying until the output acknowledges them), so shipping resumes where it left off after network failures or Filebeat restarts; rotated log files are picked up automatically.
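
A minimal, hypothetical filebeat.yml showing the pieces involved (paths and hosts are placeholders); offsets are persisted in the registry under Filebeat's data path, which is what allows shipping to resume after a restart:

```shell
cat > filebeat.yml <<'EOF'
filebeat.inputs:
  - type: filestream
    id: app-logs
    paths:
      - /var/log/app/*.log             # rotated files are picked up by the same input
output.elasticsearch:
  hosts: ["http://elasticsearch:9200"] # events are retried until acknowledged (at-least-once)
EOF
```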

13. Data Storage Comparison: ES, Time Series DB, ClickHouse

  1. Elasticsearch (ES):
  • Data Type: Primarily used for log data.
  • Strengths: Powerful full-text search and querying capabilities, flexible index and mapping configurations, rich aggregation queries, and visualization support (e.g., Kibana).
  • Weaknesses: Not optimized for high-frequency time series data; storage and query performance is limited by data volume and index structure.
  2. Time Series Database (e.g., Prometheus, InfluxDB):
  • Data Type: Optimized for time-series data (metrics).
  • Strengths: High-performance storage and query capabilities for time-series data, efficient storage compression, and built-in graphing and alerting features.
  • Weaknesses: Not suitable for non-time-series data (e.g., logs or complex text data).
  3. ClickHouse:
  • Data Type: Handles large-scale data sets, including time-series data, logs, and complex queries.
  • Strengths: High-performance columnar storage for large-scale data, supports fast OLAP queries and aggregation operations, highly scalable with distributed deployment.
  • Weaknesses: Configuration and maintenance are complex; not specifically designed for time-series data.

The following Q/A simulation covers the evolution of log systems, focusing on key technologies such as ELK (Elasticsearch, Logstash, Kibana) and the Grafana stack (Grafana, Loki, Tempo), along with their characteristics, evolution, and suitable scenarios:

Q1: How has the evolution of log systems impacted enterprise operations and monitoring?

A1: The evolution of log systems has enabled enterprises to handle and analyze large volumes of log data more efficiently. Early log systems mainly focused on collecting and storing logs, whereas modern systems emphasize real-time analysis, visualization, and automated responses. This evolution allows enterprises to identify and resolve issues faster, improve operational efficiency, and gain deeper business insights.

Q2: What advantages does the ELK Stack offer in log processing and analysis?

A2: The ELK Stack offers robust log processing and analysis capabilities:

  • Elasticsearch: It stores and searches log data, supporting efficient full-text search and complex queries.
  • Logstash: Responsible for data collection, processing, and forwarding, supporting a wide variety of input and output plugins and data transformation and formatting.
  • Kibana: A visualization tool that helps users create dashboards and reports, facilitating real-time monitoring and data analysis.

Q3: How does Grafana’s Loki compare to the ELK Stack?

A3: Loki and ELK Stack both serve log management purposes, but they differ in design and use cases:

  • Loki: Focuses on simplifying log data storage and querying, tightly integrated with Grafana, and is highly efficient at handling large-scale log data. Its design is inspired by Prometheus, with a focus on efficient indexing and storage of logs but lacking full-text search capabilities.
  • ELK Stack: More feature-rich, with advanced search and analysis capabilities, though it might require more resources and configuration to handle complex queries and storage needs.
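
To make the contrast concrete, a hypothetical query for the same "error" lines against both systems (labels, index names, and endpoints are placeholders): Loki combines a label selector with a grep-like line filter, while Elasticsearch runs a full-text query over indexed documents:

```shell
# Loki: label selector + line filter (labels are indexed, log content is scanned).
curl -sG 'http://loki:3100/loki/api/v1/query_range' \
  --data-urlencode 'query={app="payments"} |= "error"' \
  --data-urlencode 'limit=50'

# Elasticsearch: full-text match query against an index of log documents.
curl -s 'http://elasticsearch:9200/app-logs/_search' \
  -H 'Content-Type: application/json' \
  -d '{"query": {"match": {"message": "error"}}}'
```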

Q4: How should modern log systems be chosen?

A4: Choosing the right log system should consider factors like:

  • Data volume and processing needs: If you need to process large-scale log data and prioritize real-time analysis, Grafana Loki is a good choice. For scenarios that require complex search and analysis capabilities, ELK Stack is more suitable.
  • Integration and compatibility: Consider the integration needs with existing systems. If you already use Grafana for visualization, Loki might be easier to integrate.
  • Resources and management: ELK Stack may require more resources and management, while Loki offers a simplified log processing solution.

Q5: How can log storage and query performance be optimized in the ELK Stack?

A5: Performance in the ELK Stack can be optimized by:

  • Index management: Plan index strategies well, regularly optimize and merge indexes, and set appropriate index templates.
  • Hardware configuration: Add more nodes and configure memory and storage properly to improve processing power.
  • Query optimization: Optimize query statements, use proper data type mapping for fields, and enable caching mechanisms.

Q6: How does Grafana Tempo facilitate distributed tracing, and how does it work with the ELK Stack?

A6: Grafana Tempo is a high-performance distributed tracing system used for collecting and analyzing request trace data in distributed systems. When integrated with the ELK Stack:

  • Tempo: Works with Grafana to visualize distributed tracing, helping users understand delays and bottlenecks in requests.
  • ELK Stack: Can be used alongside Tempo to correlate log data with tracing data, providing comprehensive system monitoring and troubleshooting capabilities.

Q7: How can high availability and data backups be achieved in a log system?

A7: High availability and data backups can be achieved through:

  • ELK Stack: Configure Elasticsearch replicas and snapshots for data redundancy and backup.
  • Grafana Loki: Set up high-availability clusters and backup strategies to ensure reliable log data storage and recovery.
  • Overall: Implement load balancing, failover mechanisms, and regular backup strategies to enhance system reliability and data security.

Q8: What is the future trend of log system evolution?

A8: Future trends in log system evolution include:

  • Intelligence and automation: Incorporating more machine learning and artificial intelligence to automatically identify anomalies and offer optimization suggestions.
  • More efficient storage and retrieval: Continuous optimization of log storage formats and retrieval algorithms to improve performance and reduce costs.
  • Cross-platform integration: Enhancing integration with different data sources and platforms, providing a more unified and comprehensive monitoring solution.

These Q/As summarize the evolution of log systems and the trade-offs of the related technologies.

The next Q/A simulation turns to observability systems more broadly, looking at how the ELK Stack, the Grafana stack (Loki, Tempo, etc.), and ClickHouse have evolved in data collection, processing, analysis, and visualization, and how they adapt to current internet technology trends.

Q1: How would you evaluate the role of ELK Stack in observability systems, particularly in data storage and querying?

A1: ELK Stack (Elasticsearch, Logstash, Kibana) holds a significant position in observability systems:

  • Elasticsearch: Offers powerful full-text search and complex querying, ideal for storing and analyzing large volumes of log data in real-time.
  • Logstash: Provides flexible data input and processing.
  • Kibana: Features a rich set of visualization tools for creating dashboards and charts, facilitating monitoring and analysis.

However, as data scales, the resource requirements and management complexity of ELK Stack increase, leading to the development of alternative technologies like Grafana Loki and ClickHouse.

Q2: What advantages do Grafana’s stack (Loki, Tempo) offer over ELK Stack?

A2: Grafana’s stack offers the following advantages:

  • Loki: Focuses on log data storage and querying, integrates seamlessly with Grafana, and optimizes log indexing and storage for large-scale log data. Inspired by Prometheus, it simplifies log handling and querying.
  • Tempo: Provides distributed tracing, integrating with Grafana to visualize request chains and help identify delays and bottlenecks in systems.
  • Grafana: As a visualization tool, supports multiple data sources (like Prometheus, InfluxDB, Elasticsearch) and provides a unified monitoring dashboard.

Compared to ELK Stack, Grafana’s stack tends to be more lightweight, easier to configure and extend, though it lacks the advanced query capabilities and full-text search of ELK Stack.

Q3: What advantages does ClickHouse offer in log and metric data storage and analysis?

A3: ClickHouse is a high-performance columnar database with the following advantages:

  • Efficient storage: Its columnar storage format is optimized for high compression rates, reducing storage costs.
  • Fast querying: Optimized for reading large volumes of data, especially useful for analytical queries and real-time analysis.
  • Scalability: Supports horizontal scaling, capable of handling petabyte-scale data.

ClickHouse’s high performance and compression make it an ideal choice for storing and analyzing log and metric data, particularly in scenarios requiring fast queries and large-scale data analysis.

Q4: How can a unified view of data be achieved in modern observability systems?

A4: A unified view of data can be achieved by:

  • Integrating different data sources: Use Grafana’s data source plugins to integrate different monitoring tools (like Prometheus, Elasticsearch, Loki, ClickHouse) into a single interface.
  • Data warehouse: Centralize data in a powerful data warehouse like ClickHouse to enable unified querying and analysis across all data.
  • APIs and data aggregation: Use APIs and data aggregation platforms to merge and analyze data from different tools, offering comprehensive views and insights.

Q5: How are current internet technology trends impacting observability systems?

A5: Current internet technology trends influence observability systems in the following ways:

  • Cloud-native and microservices: The adoption of cloud-native and microservice architectures increases the need for logs, metrics, and tracing data, driving the development of log management tools and distributed tracing systems.
  • Automation and intelligence: The growing demand for automated monitoring, fault detection, and self-healing systems encourages observability tools to integrate more machine learning and AI features.
  • Big data and real-time analysis: The need for real-time data analysis drives the development of high-performance databases (like ClickHouse) and stream processing technologies.
  • Data privacy and compliance: As data privacy concerns rise, observability systems are strengthening their support for data security and compliance.

Q6: How can high availability and disaster recovery be handled in observability systems?

A6: High availability and disaster recovery can be managed by:

  • Redundancy and backup: Configure redundant data storage and regular backups. In ELK Stack, Elasticsearch’s replication mechanism and snapshots ensure data redundancy. Grafana Loki achieves high availability through cluster mode and backup strategies.
  • Distributed deployment: Deploy systems across multiple data centers or cloud regions to ensure that if one region fails, others can take over.
  • Failover and recovery: Set up automatic failover mechanisms and disaster recovery plans to quickly restore system functionality and data.

Q7: What are the future trends in observability systems?

A7: Future trends in observability systems include:

  • Smarter analytics: More machine learning and AI features for automated anomaly detection and root cause analysis.
  • Seamless integration: Enhanced integration across different data sources, including logs, metrics, and traces, for a unified observability experience.
  • Cloud-native observability: Tools like Grafana’s stack and ELK Stack are increasingly optimized for cloud-native environments.
  • More efficient storage: Tools like ClickHouse are evolving to handle massive data volumes, providing fast querying and efficient data storage solutions.

Original statement: This article was published on the Tencent Cloud Developer Community with the author's authorization and may not be reproduced without permission.

For infringement concerns, please contact cloudcommunity@tencent.com for removal.
