Logging using elasticsearch-py

python elasticsearch

I would like to add logging to a Python script that uses elasticsearch-py. In particular, I want to have three logs:

1. General log: INFO and above, both to stdout and to a file.
2. ES log: ES-related messages only, written to a file only.
3. ES tracing log: extended ES logging (for instance, curl queries and their output), written to a file only.

Here is what I have so far:

    import logging
    import logging.handlers

    es_logger = logging.getLogger('elasticsearch')
    es_logger.setLevel(logging.INFO)
    es_logger_handler = logging.handlers.RotatingFileHandler('top-camps-base.log',
                                                             maxBytes=0.5*10**9, backupCount=3)
    es_logger.addHandler(es_logger_handler)

    es_tracer = logging.getLogger('elasticsearch.trace')
    es_tracer.setLevel(logging.DEBUG)
    es_tracer_handler = logging.handlers.RotatingFileHandler('top-camps-full.log',
                                                             maxBytes=0.5*10**9, backupCount=3)
    es_tracer.addHandler(es_tracer_handler)

    logger = logging.getLogger('mainLog')
    logger.setLevel(logging.DEBUG)

    # create file handler
    fileHandler = logging.handlers.RotatingFileHandler('top-camps.log', maxBytes=10**6, backupCount=3)
    fileHandler.setLevel(logging.INFO)

    # create console handler
    consoleHandler = logging.StreamHandler()
    consoleHandler.setLevel(logging.INFO)

    # create formatter and add it to the handlers
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    consoleHandler.setFormatter(formatter)
    fileHandler.setFormatter(formatter)

    # add the handlers to logger
    logger.addHandler(consoleHandler)
    logger.addHandler(fileHandler)

My problem is that es_logger's INFO messages are also displayed on the terminal. The log messages are, in fact, saved to the right files!

If I remove the part related to logger, then the ES logging works fine, i.e. messages are only saved to the corresponding files. But then I don't have the general log.... What am I doing wrong in the last part of the settings?


Edit

Possible hint: In the sources of elasticsearch-py there is a logger stored in a variable named logger. Could it be that it conflicts with mine? I tried renaming my logger to main_logger in the lines above, but it didn't help.
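
As a side note, the logging module keys loggers by the name passed to getLogger(), not by the Python variable that holds them, so renaming a variable cannot create or remove a clash. A minimal sketch, assuming (as the handlers above do) that elasticsearch-py obtains its logger via logging.getLogger('elasticsearch'):

    import logging

    # Loggers are identified by name, not by the variable that references them.
    a = logging.getLogger('mainLog')
    main_logger = logging.getLogger('mainLog')
    print(a is main_logger)                        # True: same logger object

    # A logger named 'mainLog' is unrelated to the 'elasticsearch' logger,
    # whatever variable name the library uses internally.
    es = logging.getLogger('elasticsearch')
    print(es is logging.getLogger('mainLog'))      # False: different names, different loggers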

Possible hint 2: If I replace logger = logging.getLogger('mainLog') with logger = logging.getLogger(), then the format of es_logger's console output changes and becomes identical to the one defined in the snippet.
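
A minimal sketch of what hint 2 points at, assuming the format change comes from a handler attached to the root logger: logging.getLogger() with no argument returns the root logger, and any handler or formatter attached there applies to every record that propagates up to it, including records from 'elasticsearch'.

    import logging

    root = logging.getLogger()           # no name: the root logger
    named = logging.getLogger('mainLog')

    print(root.name)                     # 'root'
    print(named.parent is root)          # True: 'mainLog' propagates to the root logger

    # A formatter attached to the root logger formats every record that
    # propagates up to it, including records from 'elasticsearch'.
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
    root.addHandler(handler)
    logging.getLogger('elasticsearch').warning('formatted by the root handler')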

Accepted answer

I think you are being hit by the somewhat confusing propagation rules of the logger hierarchy. Everything logged to "elasticsearch.trace" that passes that logger's log level will propagate first to the "elasticsearch" logger and then to the root ("") logger. Note that once a message passes the log level of the "elasticsearch.trace" logger, the log levels of its ancestors ("elasticsearch" and root) are not checked; the message is handed to all of their handlers. (The handlers themselves have log levels that do apply.)
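
As a small aside, this point about levels can be shown directly: an ancestor logger's own level is ignored during propagation, while the levels of its handlers are still applied. A minimal sketch:

    import logging

    parent = logging.getLogger('elasticsearch')
    parent.setLevel(logging.ERROR)            # the ancestor's logger level is ignored on propagation
    parent_handler = logging.StreamHandler()
    parent_handler.setLevel(logging.INFO)     # the handler's level IS applied
    parent.addHandler(parent_handler)

    child = logging.getLogger('elasticsearch.trace')
    child.setLevel(logging.DEBUG)

    child.info('emitted by the parent handler despite parent.setLevel(ERROR)')
    child.debug('passes the child logger but is dropped by the INFO handler level')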

Consider the following example that illustrates the issue, and a possible solution:

    import logging

    # The following line will basicConfig() the root handler
    logging.info('DUMMY - NOT SEEN')

    ll = logging.getLogger('foo')
    ll.setLevel('DEBUG')
    ll.addHandler(logging.StreamHandler())
    ll.debug('msg1')
    ll.propagate = False
    ll.debug('msg2')

Output:

    msg1
    DEBUG:foo:msg1
    msg2

You see that "msg1" is logged both by the "foo" logger, and its parent, the root logger (as "DEBUG:foo:msg1"). Then, when propagation is turned off ll.propagate = False before "msg2", the root logger no longer logs it. Now, if you were to comment out the first line (logging.info("DUMMY..."), then the behavior would change so that the root logger line would not be shown. This is because the logging module top level functions info(), debug() etc. configure the root logger with a handler when no handler has yet been defined. That is also why you see different behavior in your example when you modify the root handler by doing logger = logging.getLogger().
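
This side effect is easy to verify, assuming a fresh interpreter in which nothing has configured logging yet:

    import logging

    root = logging.getLogger()
    print(root.handlers)               # []: a fresh root logger has no handlers

    # The module-level logging.info()/debug()/... functions call basicConfig()
    # whenever the root logger has no handlers, installing a StreamHandler with
    # the default '%(levelname)s:%(name)s:%(message)s' format.
    logging.info('DUMMY - NOT SEEN')   # below the default WARNING root level, so nothing is printed
    print(root.handlers)               # [<StreamHandler ...>]: a handler was added anyway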

I can't see in your code that you would be doing anything to the root logger, but as you see, a stray logging.info() or the like in your code or library code would cause a handler to be added.

So, to answer your question: I would set logger.propagate = False on the loggers where that makes sense for you, and for the loggers where you do want propagation, check that the log levels of the handlers themselves are set as you want them.

Here is an attempt:

    es_logger = logging.getLogger('elasticsearch')
    es_logger.propagate = False
    es_logger.setLevel(logging.INFO)
    es_logger_handler = logging.handlers.RotatingFileHandler('top-camps-base.log',
                                                             maxBytes=0.5*10**9, backupCount=3)
    es_logger.addHandler(es_logger_handler)

    es_tracer = logging.getLogger('elasticsearch.trace')
    es_tracer.propagate = False
    es_tracer.setLevel(logging.DEBUG)
    es_tracer_handler = logging.handlers.RotatingFileHandler('top-camps-full.log',
                                                             maxBytes=0.5*10**9, backupCount=3)
    es_tracer.addHandler(es_tracer_handler)

    logger = logging.getLogger('mainLog')
    logger.propagate = False
    logger.setLevel(logging.DEBUG)

    # create file handler
    fileHandler = logging.handlers.RotatingFileHandler('top-camps.log', maxBytes=10**6, backupCount=3)
    fileHandler.setLevel(logging.INFO)

    # create console handler
    consoleHandler = logging.StreamHandler()
    consoleHandler.setLevel(logging.INFO)

    # create formatter and add it to the handlers
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    consoleHandler.setFormatter(formatter)
    fileHandler.setFormatter(formatter)

    # add the handlers to logger
    logger.addHandler(consoleHandler)
    logger.addHandler(fileHandler)
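
One possible way to sanity-check this setup: emit a record through each logger and confirm that only mainLog reaches the console, while the elasticsearch loggers write only to their files. A hypothetical check, run right after the configuration above:

    # Verify routing: only the first message should appear on the terminal.
    logger.info('general message: console and top-camps.log')
    es_logger.info('ES message: top-camps-base.log only')
    es_tracer.debug('ES trace message: top-camps-full.log only')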
