This article takes a look at PowerJob's StoreStrategy.
tech/powerjob/worker/common/constants/StoreStrategy.java
@Getter
@AllArgsConstructor
public enum StoreStrategy {

    DISK("磁盘"),
    MEMORY("内存");

    private final String des;
}
The StoreStrategy enum defines two constants, DISK and MEMORY, each carrying a short description in its des field.
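As a quick sketch of the enum on its own (the getDes() accessor is generated by Lombok's @Getter):

StoreStrategy strategy = StoreStrategy.DISK;
// prints "磁盘"; StoreStrategy.MEMORY.getDes() would return "内存"
System.out.println(strategy.getDes());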
tech/powerjob/worker/persistence/ConnectionFactory.java
@Slf4j
public class ConnectionFactory {

    private volatile DataSource dataSource;

    private final String H2_PATH = PowerFileUtils.workspace() + "/h2/" + CommonUtils.genUUID() + "/";
    private final String DISK_JDBC_URL = String.format("jdbc:h2:file:%spowerjob_worker_db;DB_CLOSE_DELAY=-1;DATABASE_TO_UPPER=false", H2_PATH);
    private final String MEMORY_JDBC_URL = String.format("jdbc:h2:mem:%spowerjob_worker_db;DB_CLOSE_DELAY=-1;DATABASE_TO_UPPER=false", H2_PATH);

    public Connection getConnection() throws SQLException {
        return dataSource.getConnection();
    }

    public synchronized void initDatasource(StoreStrategy strategy) {
        // H2 has quite a few compatibility issues, so log the version up front to ease troubleshooting
        log.info("[PowerDatasource] H2 database version: {}", JavaUtils.determinePackageVersion(Driver.class));
        // Fall back to DISK for unit tests; otherwise the DAO layer could not be tested in isolation
        strategy = strategy == null ? StoreStrategy.DISK : strategy;

        HikariConfig config = new HikariConfig();
        config.setDriverClassName(Driver.class.getName());
        config.setJdbcUrl(strategy == StoreStrategy.DISK ? DISK_JDBC_URL : MEMORY_JDBC_URL);
        config.setAutoCommit(true);
        // minimum number of idle connections in the pool
        config.setMinimumIdle(2);
        // maximum number of connections in the pool
        config.setMaximumPoolSize(32);
        dataSource = new HikariDataSource(config);

        log.info("[PowerDatasource] init h2 datasource successfully, use url: {}", config.getJdbcUrl());

        // delete the database files when the JVM shuts down
        try {
            FileUtils.forceDeleteOnExit(new File(H2_PATH));
            log.info("[PowerDatasource] delete worker db file[{}] on JVM exit successfully", H2_PATH);
        } catch (Throwable t) {
            log.warn("[PowerDatasource] delete file on JVM exit failed: {}", H2_PATH, t);
        }
    }
}
When ConnectionFactory initializes the datasource in initDatasource, it picks the H2 jdbcUrl according to the StoreStrategy. For DISK the url is jdbc:h2:file:%spowerjob_worker_db;DB_CLOSE_DELAY=-1;DATABASE_TO_UPPER=false, and for MEMORY it is jdbc:h2:mem:%spowerjob_worker_db;DB_CLOSE_DELAY=-1;DATABASE_TO_UPPER=false, where %s is H2_PATH, which resolves to PowerFileUtils.workspace() + "/h2/" + CommonUtils.genUUID() + "/".
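A minimal usage sketch, assuming ConnectionFactory's implicit no-arg constructor (the constructor is not shown in the excerpt above):

ConnectionFactory connectionFactory = new ConnectionFactory();
// MEMORY selects MEMORY_JDBC_URL; DISK (or null, which falls back to DISK) selects DISK_JDBC_URL
connectionFactory.initDatasource(StoreStrategy.MEMORY);
try (Connection connection = connectionFactory.getConnection()) {
    // the connection comes from the HikariCP pool built above and points at the worker-local H2 database
} catch (SQLException e) {
    // handle or rethrow as needed
}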
tech/powerjob/worker/common/utils/PowerFileUtils.java
@Slf4j
public class PowerFileUtils {

    /**
     * Get the working directory.
     * @return the workspace; users may customize the storage directory via startup configuration, defaulting to a directory under user.home
     */
    public static String workspace() {
        String workspaceByDKey = System.getProperty(PowerJobDKey.WORKER_WORK_SPACE);
        if (StringUtils.isNotEmpty(workspaceByDKey)) {
            log.info("[PowerFileUtils] [workspace] use custom workspace: {}", workspaceByDKey);
            return workspaceByDKey;
        }
        final String userHome = System.getProperty("user.home").concat("/powerjob/worker");
        log.info("[PowerFileUtils] [workspace] use user.home as workspace: {}", userHome);
        return userHome;
    }
}
workspace first reads the system property powerjob.worker.workspace; if it is not set, it falls back to the powerjob/worker directory under the user's home directory.
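A small sketch of overriding the workspace before the worker starts; the /data/powerjob path below is purely illustrative:

// PowerJobDKey.WORKER_WORK_SPACE is the "powerjob.worker.workspace" system property,
// which can equally be supplied as -Dpowerjob.worker.workspace=... on the JVM command line
System.setProperty(PowerJobDKey.WORKER_WORK_SPACE, "/data/powerjob");
String workspace = PowerFileUtils.workspace(); // now returns /data/powerjob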
tech/powerjob/worker/autoconfigure/PowerJobProperties.java
@Setter
@Getter
public static class Worker {

    //......

    /**
     * Protocol for communication between WORKER and server
     */
    private Protocol protocol = Protocol.AKKA;

    /**
     * Local store strategy for H2 database. {@code disk} or {@code memory}.
     */
    private StoreStrategy storeStrategy = StoreStrategy.DISK;

    //......
}
PowerJobProperties.Worker's storeStrategy defaults to DISK.
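A hedged sketch of flipping the strategy on the properties object itself; in a Spring Boot application the same field would normally be bound from configuration (e.g. a powerjob.worker.store-strategy entry, assuming the usual powerjob prefix, which is not shown in the excerpt):

PowerJobProperties.Worker worker = new PowerJobProperties.Worker();
// switch the worker-local H2 store from the DISK default to pure in-memory mode
worker.setStoreStrategy(StoreStrategy.MEMORY);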
PowerJob's StoreStrategy enum defines two values, DISK and MEMORY, with DISK as the default. It controls whether the worker-local H2 database uses a file-based or an in-memory jdbcUrl; in file mode the default path sits under the powerjob/worker directory in the user's home directory.