A Must-Read Paper List on Pre-trained Language Models (PLMs), with Links to Paper PDFs, Source Code, and Models

数据派THU | Published 2019-10-10
This article presents the must-read paper list on pre-trained language models compiled by the Tsinghua University NLP group (THUNLP), including links to the papers' PDFs, source code, and models.

[ Overview ] Over the past two years, pre-trained language models (PLMs) such as ELMo and BERT have set new records on a wide range of tasks, drawing considerable attention from both academia and industry.

The THUNLP group maintains this must-read paper list in the GitHub project thunlp/PLMpapers, with links to the papers' PDFs, source code, and models. The full list is as follows:

Models:

Deep contextualized word representations. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee and Luke Zettlemoyer. NAACL 2018.

  • Paper: https://arxiv.org/pdf/1802.05365.pdf
  • Project: https://allennlp.org/elmo (ELMo)
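
For readers who want to try the model directly, here is a minimal usage sketch with the AllenNLP `Elmo` module from the project linked above. The `options.json` / `weights.hdf5` paths are placeholders for the files distributed on that page; treat this as an illustration rather than the authors' reference code.

```python
# Minimal ELMo sketch (illustrative): contextual embeddings for one sentence.
from allennlp.modules.elmo import Elmo, batch_to_ids

# Placeholder paths -- download the actual options/weights files
# from the ELMo project page linked above.
options_file = "elmo_options.json"
weight_file = "elmo_weights.hdf5"

# One output representation (a learned mix of the biLM layers), no dropout.
elmo = Elmo(options_file, weight_file, num_output_representations=1, dropout=0.0)

sentences = [["Pre-trained", "language", "models", "are", "useful", "."]]
character_ids = batch_to_ids(sentences)         # (batch, seq_len, 50) character ids
output = elmo(character_ids)
embeddings = output["elmo_representations"][0]  # (batch, seq_len, embedding_dim)
print(embeddings.shape)
```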

Universal Language Model Fine-tuning for Text Classification. Jeremy Howard and Sebastian Ruder. ACL 2018.

  • Paper: https://www.aclweb.org/anthology/P18-1031
  • Project: http://nlp.fast.ai/category/classification.html (ULMFiT)

Improving Language Understanding by Generative Pre-Training. Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever. Preprint.

  • Paper: https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf
  • Project: https://openai.com/blog/language-unsupervised/ (GPT)

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. NAACL 2019.

  • Paper: https://arxiv.org/pdf/1810.04805.pdf
  • Code + model: https://github.com/google-research/bert
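
As a quick way to experiment with the released checkpoints, the sketch below runs masked-token prediction with a pretrained BERT. It uses the Hugging Face Transformers wrappers (an assumption on our part; the repository above ships Google's original TensorFlow code, but the converted checkpoints are equivalent for this purpose).

```python
# Masked language modeling with a pretrained BERT (illustrative sketch).
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

text = "Pre-trained language models have [MASK] many NLP benchmarks."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Find the [MASK] position and take the highest-scoring vocabulary item.
mask_positions = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_ids = logits[0, mask_positions].argmax(dim=-1)
print(tokenizer.decode(predicted_ids))
```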

Language Models are Unsupervised Multitask Learners. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei and Ilya Sutskever. Preprint.

  • Paper: https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf
  • Code: https://github.com/openai/gpt-2 (GPT-2)
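
In the same spirit, a minimal text-generation sketch with the small GPT-2 checkpoint, again via the Hugging Face Transformers wrappers rather than the original openai/gpt-2 code (assumed equivalent for this illustration):

```python
# Sample a short continuation from the small GPT-2 checkpoint (illustrative).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("Pre-trained language models have", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=40,
    do_sample=True,                       # top-k sampling instead of greedy decoding
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,  # silence the missing-pad-token warning
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```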

ERNIE: Enhanced Language Representation with Informative Entities. Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun and Qun Liu. ACL2019.

  • Paper: https://www.aclweb.org/anthology/P19-1139
  • Code + model: https://github.com/thunlp/ERNIE (ERNIE (Tsinghua))

ERNIE: Enhanced Representation through Knowledge Integration. Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian and Hua Wu. Preprint.

  • Paper: https://arxiv.org/pdf/1904.09223.pdf
  • Code: https://github.com/PaddlePaddle/ERNIE/tree/develop/ERNIE (ERNIE (Baidu))

Defending Against Neural Fake News. Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, Yejin Choi. NeurIPS.

  • Paper: https://arxiv.org/pdf/1905.12616.pdf
  • Project: https://rowanzellers.com/grover/ (Grover)

Cross-lingual Language Model Pretraining. Guillaume Lample, Alexis Conneau. NeurIPS2019.

  • Paper: https://arxiv.org/pdf/1901.07291.pdf
  • Code + model: https://github.com/facebookresearch/XLM (XLM)

Multi-Task Deep Neural Networks for Natural Language Understanding. Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao. ACL2019.

  • Paper: https://www.aclweb.org/anthology/P19-1441
  • Code + model: https://github.com/namisan/mt-dnn (MT-DNN)

MASS: Masked Sequence to Sequence Pre-training for Language Generation. Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu. ICML2019.

  • Paper: https://arxiv.org/pdf/1905.02450.pdf
  • Code + model: https://github.com/microsoft/MASS

Unified Language Model Pre-training for Natural Language Understanding and Generation. Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, Hsiao-Wuen Hon. Preprint.

  • Paper: https://arxiv.org/pdf/1905.03197.pdf (UniLM)

XLNet: Generalized Autoregressive Pretraining for Language Understanding. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. NeurIPS2019.

  • Paper: https://arxiv.org/pdf/1906.08237.pdf
  • Code + model: https://github.com/zihangdai/xlnet

RoBERTa: A Robustly Optimized BERT Pretraining Approach. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. Preprint.

  • Paper: https://arxiv.org/pdf/1907.11692.pdf
  • Code + model: https://github.com/pytorch/fairseq

SpanBERT: Improving Pre-training by Representing and Predicting Spans. Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, Omer Levy. Preprint.

  • Paper: https://arxiv.org/pdf/1907.10529.pdf
  • Code + model: https://github.com/facebookresearch/SpanBERT

Knowledge Enhanced Contextual Word Representations. Matthew E. Peters, Mark Neumann, Robert L. Logan IV, Roy Schwartz, Vidur Joshi, Sameer Singh, Noah A. Smith. EMNLP2019.

  • Paper: https://arxiv.org/pdf/1909.04164.pdf (KnowBert)

VisualBERT: A Simple and Performant Baseline for Vision and Language. Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang. Preprint.

  • Paper: https://arxiv.org/pdf/1908.03557.pdf
  • Code + model: https://github.com/uclanlp/visualbert

ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks. Jiasen Lu, Dhruv Batra, Devi Parikh, Stefan Lee. NeurIPS.

  • Paper: https://arxiv.org/pdf/1908.02265.pdf
  • Code + model: https://github.com/jiasenlu/vilbert_beta

VideoBERT: A Joint Model for Video and Language Representation Learning. Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, Cordelia Schmid. ICCV2019.

  • Paper: https://arxiv.org/pdf/1904.01766.pdf

LXMERT: Learning Cross-Modality Encoder Representations from Transformers. Hao Tan, Mohit Bansal. EMNLP2019.

  • Paper: https://arxiv.org/pdf/1908.07490.pdf
  • Code + model: https://github.com/airsplay/lxmert

VL-BERT: Pre-training of Generic Visual-Linguistic Representations. Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, Jifeng Dai. Preprint.

  • Paper: https://arxiv.org/pdf/1908.08530.pdf

Unicoder-VL: A Universal Encoder for Vision and Language by Cross-modal Pre-training. Gen Li, Nan Duan, Yuejian Fang, Ming Gong, Daxin Jiang, Ming Zhou. Preprint.

  • Paper: https://arxiv.org/pdf/1908.06066.pdf

K-BERT: Enabling Language Representation with Knowledge Graph. Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, Ping Wang. Preprint.

  • Paper: https://arxiv.org/pdf/1909.07606.pdf

Fusion of Detected Objects in Text for Visual Question Answering. Chris Alberti, Jeffrey Ling, Michael Collins, David Reitter. EMNLP2019.

  • Paper: https://arxiv.org/pdf/1908.05054.pdf (B2T2)

Contrastive Bidirectional Transformer for Temporal Representation Learning. Chen Sun, Fabien Baradel, Kevin Murphy, Cordelia Schmid. Preprint.

  • Paper: https://arxiv.org/pdf/1906.05743.pdf (CBT)

ERNIE 2.0: A Continual Pre-training Framework for Language Understanding. Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Hao Tian, Hua Wu, Haifeng Wang. Preprint.

  • Paper: https://arxiv.org/pdf/1907.12412v1.pdf
  • Code: https://github.com/PaddlePaddle/ERNIE/blob/develop/README.md

75 Languages, 1 Model: Parsing Universal Dependencies Universally. Dan Kondratyuk, Milan Straka. EMNLP2019.

  • Paper: https://arxiv.org/pdf/1904.02099.pdf
  • Code + model: https://github.com/hyperparticle/udify (UDify)

Pre-Training with Whole Word Masking for Chinese BERT. Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu. Preprint.

  • Paper: https://arxiv.org/pdf/1906.08101.pdf
  • Code + model: https://github.com/ymcui/Chinese-BERT-wwm/blob/master/README_EN.md (Chinese-BERT-wwm)

Knowledge Distillation and Model Compression:
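
Before the individual papers, here is a generic sketch of the idea these works build on: a small student network is trained to match a large teacher's softened output distribution in addition to the gold labels. This is an illustrative PyTorch loss in the spirit of classic distillation, not the exact objective of any paper below.

```python
# Generic knowledge-distillation loss (illustrative only).
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft-target term: KL divergence between temperature-softened distributions,
    # scaled by T^2 to keep gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: ordinary cross-entropy against the gold labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```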

TinyBERT: Distilling BERT for Natural Language Understanding. Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, Qun Liu.

  • Paper: https://arxiv.org/pdf/1909.10351v1.pdf

Distilling Task-Specific Knowledge from BERT into Simple Neural Networks. Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, Jimmy Lin. Preprint.

  • Paper: https://arxiv.org/pdf/1903.12136.pdf

Patient Knowledge Distillation for BERT Model Compression. Siqi Sun, Yu Cheng, Zhe Gan, Jingjing Liu. EMNLP2019.

  • Paper: https://arxiv.org/pdf/1908.09355.pdf
  • Code: https://github.com/intersun/PKD-for-BERT-Model-Compression

Model Compression with Multi-Task Knowledge Distillation for Web-scale Question Answering System. Ze Yang, Linjun Shou, Ming Gong, Wutao Lin, Daxin Jiang. Preprint.

  • Paper: https://arxiv.org/pdf/1904.09636.pdf

PANLP at MEDIQA 2019: Pre-trained Language Models, Transfer Learning and Knowledge Distillation. Wei Zhu, Xiaofeng Zhou, Keqiang Wang, Xun Luo, Xiepeng Li, Yuan Ni, Guotong Xie. The 18th BioNLP workshop.

  • Paper: https://www.aclweb.org/anthology/W19-5040

Improving Multi-Task Deep Neural Networks via Knowledge Distillation for Natural Language Understanding. Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao. Preprint.

  • Paper: https://arxiv.org/pdf/1904.09482.pdf
  • Code + model: https://github.com/namisan/mt-dnn

Well-Read Students Learn Better: The Impact of Student Initialization on Knowledge Distillation. Iulia Turc, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. Preprint.

  • Paper: https://arxiv.org/pdf/1908.08962.pdf

Small and Practical BERT Models for Sequence Labeling. Henry Tsai, Jason Riesa, Melvin Johnson, Naveen Arivazhagan, Xin Li, Amelia Archer. EMNLP2019.

  • Paper: https://arxiv.org/pdf/1909.00100.pdf

Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT. Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W. Mahoney, Kurt Keutzer. Preprint.

  • Paper: https://arxiv.org/pdf/1909.05840.pdf

ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. Anonymous authors. ICLR2020 under review.

  • Paper: https://openreview.net/pdf?id=H1eA7AEtvS

Analysis:

Revealing the Dark Secrets of BERT. Olga Kovaleva, Alexey Romanov, Anna Rogers, Anna Rumshisky. EMNLP2019.

  • Paper: https://arxiv.org/abs/1908.08593

How Does BERT Answer Questions? A Layer-Wise Analysis of Transformer Representations. Betty van Aken, Benjamin Winter, Alexander Löser, Felix A. Gers. CIKM2019.

  • Paper: https://arxiv.org/pdf/1909.04925.pdf

Are Sixteen Heads Really Better than One?. Paul Michel, Omer Levy, Graham Neubig. Preprint.

  • Paper: https://arxiv.org/pdf/1905.10650.pdf
  • Code: https://github.com/pmichel31415/are-16-heads-really-better-than-1

Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment. Di Jin, Zhijing Jin, Joey Tianyi Zhou, Peter Szolovits. Preprint.

  • Paper: https://arxiv.org/pdf/1907.11932.pdf
  • Code: https://github.com/jind11/TextFooler

BERT has a Mouth, and It Must Speak: BERT as a Markov Random Field Language Model. Alex Wang, Kyunghyun Cho. NeuralGen2019.

  • Paper: https://arxiv.org/pdf/1902.04094.pdf
  • Code: https://github.com/nyu-dl/bert-gen

Linguistic Knowledge and Transferability of Contextual Representations. Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, Noah A. Smith. NAACL2019.

  • Paper: https://www.aclweb.org/anthology/N19-1112

What Does BERT Look At? An Analysis of BERT's Attention. Kevin Clark, Urvashi Khandelwal, Omer Levy, Christopher D. Manning. BlackBoxNLP2019.

  • Paper: https://arxiv.org/pdf/1906.04341.pdf
  • Code: https://github.com/clarkkev/attention-analysis

Open Sesame: Getting Inside BERT's Linguistic Knowledge. Yongjie Lin, Yi Chern Tan, Robert Frank. BlackBoxNLP2019.

  • Paper: https://arxiv.org/pdf/1906.01698.pdf
  • Code: https://github.com/yongjie-lin/bert-opensesame

Analyzing the Structure of Attention in a Transformer Language Model. Jesse Vig, Yonatan Belinkov. BlackBoxNLP2019.

  • Paper: https://arxiv.org/pdf/1906.04284.pdf

Blackbox meets blackbox: Representational Similarity and Stability Analysis of Neural Language Models and Brains. Samira Abnar, Lisa Beinborn, Rochelle Choenni, Willem Zuidema. BlackBoxNLP2019.

  • Paper: https://arxiv.org/pdf/1906.01539.pdf

BERT Rediscovers the Classical NLP Pipeline. Ian Tenney, Dipanjan Das, Ellie Pavlick. ACL2019.

  • Paper: https://www.aclweb.org/anthology/P19-1452

How multilingual is Multilingual BERT?. Telmo Pires, Eva Schlinger, Dan Garrette. ACL2019.

  • Paper: https://www.aclweb.org/anthology/P19-1493

What Does BERT Learn about the Structure of Language?. Ganesh Jawahar, Benoît Sagot, Djamé Seddah. ACL2019.

  • Paper: https://www.aclweb.org/anthology/P19-1356

Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT. Shijie Wu, Mark Dredze. EMNLP2019.

  • Paper: https://arxiv.org/pdf/1904.09077.pdf

How Contextual are Contextualized Word Representations? Comparing the Geometry of BERT, ELMo, and GPT-2 Embeddings. Kawin Ethayarajh. EMNLP2019.

  • Paper: https://arxiv.org/pdf/1909.00512.pdf

Probing Neural Network Comprehension of Natural Language Arguments. Timothy Niven, Hung-Yu Kao. ACL2019.

  • Paper: https://www.aclweb.org/anthology/P19-1459
  • Code: https://github.com/IKMLab/arct2

Universal Adversarial Triggers for Attacking and Analyzing NLP. Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, Sameer Singh. EMNLP2019.

  • Paper: https://arxiv.org/pdf/1908.07125.pdf
  • Code: https://github.com/Eric-Wallace/universal-triggers

The Bottom-up Evolution of Representations in the Transformer: A Study with Machine Translation and Language Modeling Objectives. Elena Voita, Rico Sennrich, Ivan Titov. EMNLP2019.

  • Paper: https://arxiv.org/pdf/1909.01380.pdf

Do NLP Models Know Numbers? Probing Numeracy in Embeddings. Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, Matt Gardner. EMNLP2019.

  • Paper: https://arxiv.org/pdf/1909.07940.pdf

Investigating BERT's Knowledge of Language: Five Analysis Methods with NPIs. Alex Warstadt, Yu Cao, Ioana Grosu, Wei Peng, Hagen Blix, Yining Nie, Anna Alsop, Shikha Bordia, Haokun Liu, Alicia Parrish, Sheng-Fu Wang, Jason Phang, Anhad Mohananey, Phu Mon Htut, Paloma Jeretič, Samuel R. Bowman. EMNLP2019.

  • Paper: https://arxiv.org/pdf/1909.02597.pdf
  • Code: https://github.com/alexwarstadt/data_generation

Visualizing and Understanding the Effectiveness of BERT. Yaru Hao, Li Dong, Furu Wei, Ke Xu. EMNLP2019.

  • Paper: https://arxiv.org/pdf/1908.05620.pdf

Visualizing and Measuring the Geometry of BERT. Andy Coenen, Emily Reif, Ann Yuan, Been Kim, Adam Pearce, Fernanda Viégas, Martin Wattenberg. NeurIPS2019.

  • Paper: https://arxiv.org/pdf/1906.02715.pdf

On the Validity of Self-Attention as Explanation in Transformer Models. Gino Brunner, Yang Liu, Damián Pascual, Oliver Richter, Roger Wattenhofer. Preprint.

  • Paper: https://arxiv.org/pdf/1908.04211.pdf

Transformer Dissection: A Unified Understanding of Transformer's Attention via the Lens of Kernel. Yao-Hung Hubert Tsai, Shaojie Bai, Makoto Yamada, Louis-Philippe Morency, Ruslan Salakhutdinov. EMNLP2019.

  • Paper: https://arxiv.org/pdf/1908.11775.pdf

Language Models as Knowledge Bases? Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel. EMNLP2019.

  • Paper: https://arxiv.org/pdf/1909.01066.pdf
  • Code: https://github.com/facebookresearch/LAMA

Reference:

https://github.com/thunlp/PLMpapers

-END-

Editor: 王菁

Proofreader: 林亦霖
