Version: 6.8-7.15
In short, this error occurs when an Elasticsearch machine learning (ML) job cannot start because no ML node has sufficient capacity, typically because the ML nodes are already heavily loaded or have limited memory. To resolve it, you can increase the memory of the existing ML nodes, add more ML nodes to the cluster, or reduce the memory requirements of your ML jobs. Also verify that the ML nodes are configured correctly and are not affected by network or hardware problems.
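If lowering a job's memory requirement is the chosen fix, the model memory limit can be changed through the anomaly detection job APIs; the job must be closed first, and the new limit cannot be set below the memory the model already uses. A minimal sketch, where my_job and the 256mb value are placeholders:

POST _ml/anomaly_detectors/my_job/_close

POST _ml/anomaly_detectors/my_job/_update
{
  "analysis_limits": {
    "model_memory_limit": "256mb"
  }
}

Reopening the job afterwards lets the cluster retry the allocation with the smaller requirement.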
Log context
The log "Could not open job because no ML nodes with sufficient capacity were found" is emitted by the class OpenJobPersistentTasksExecutor.java. We have extracted the following from the Elasticsearch source code for those who want to dig deeper:
static ElasticsearchException makeNoSuitableNodesException(Logger logger, String jobId, String explanation) {
    String msg = "Could not open job because no suitable nodes were found; allocation explanation [" + explanation + "]";
    logger.warn("[{}] {}", jobId, msg);
    Exception detail = new IllegalStateException(msg);
    return new ElasticsearchStatusException("Could not open job because no ML nodes with sufficient capacity were found",
        RestStatus.TOO_MANY_REQUESTS, detail);
}
static ElasticsearchException makeAssignmentsNotAllowedException(Logger logger, String jobId) {
    String msg = "Cannot open jobs because persistent task assignment is disabled by the ["
        + EnableAssignmentDecider.CLUSTER_TASKS_ALLOCATION_ENABLE_SETTING.getKey() + "] setting";
    logger.warn("[{}] {}", jobId, msg);
    return new ElasticsearchStatusException(msg, RestStatus.TOO_MANY_REQUESTS);
}

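The allocation explanation that this exception wraps is also surfaced in the job's statistics, which can help pinpoint why no node qualified; my_job below is a placeholder job ID:

GET _ml/anomaly_detectors/my_job/_stats

In the response, the assignment_explanation field describes why the job could not be assigned to a node, for example which nodes were rejected for lack of memory.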