- object
    - TableRetriever

class TableRetriever(object)

Object for retrieving an entire table from an SNMP agent.
This is the (loose) equivalent of the SNMPWalk examples in
pysnmp. It also includes the code for the SNMPBulkWalk, which
is a generalisation of the SNMPWalk code.
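A minimal usage sketch follows. It assumes a variable `proxy` that is an
already-created AgentProxy bound to a UDP port (see the AgentProxy
documentation); the import path twistedsnmp.tableretriever and the example
root OID are illustrative assumptions, not guaranteed by this page.

    from twisted.internet import reactor
    from twistedsnmp.tableretriever import TableRetriever

    def printTable(result):
        # result is the { rootOID: { oid: value } } mapping described below
        for rootOID, oidValues in result.items():
            for oid, value in oidValues.items():
                print('%s %s %r' % (rootOID, oid, value))
        reactor.stop()

    def printError(reason):
        print(reason)
        reactor.stop()

    def start():
        # proxy is assumed to be an AgentProxy already listening on a UDP port;
        # the ifTable root OID below is purely illustrative
        retriever = TableRetriever(
            proxy,
            roots=['.1.3.6.1.2.1.2.2.1'],
            retryCount=4,
            timeout=2.0,
        )
        df = retriever()            # __call__ returns a Twisted Deferred
        df.addCallbacks(printTable, printError)

    reactor.callWhenRunning(start)
    reactor.run()
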
Methods defined here:
- __call__(self, recordCallback=None, startOIDs=None)
- Collect results, call recordCallback for each retrieved record
recordCallback -- called for each new record discovered
startOIDs -- optional OID markers to be used as starting point,
i.e. if passed in, we retrieve the table from startOIDs to
the end of the table.
Will use bulk downloading when available (i.e. if the
proxy implements v2c, not v1) and self.bulk is true.
The return value is a Deferred firing with a
{ rootOID: { oid: value } } mapping.
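Where records should be processed as they arrive, rather than only from the
final mapping, a recordCallback can be supplied. A small sketch, reusing the
`retriever` and reactor setup assumed above; the per-record signature shown
here follows the getTable docstring below.

    def showRecord(root, oid, value):
        # invoked once per retrieved OID:value pair (signature per getTable below)
        print('%s -> %s = %r' % (root, oid, value))

    df = retriever(recordCallback=showRecord)
    # the Deferred still fires with the complete { rootOID: { oid: value } } table
    df.addCallback(lambda table: reactor.stop())
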
- __init__(self, proxy, roots, includeStart=0, retryCount=4, timeout=2.0, maxRepetitions=128)
- Initialise the retriever
proxy -- the AgentProxy instance we want to use for
retrieval of the data
roots -- root OIDs to retrieve
includeStart -- whether to include the starting OID
in the set of results; by default, results begin at
the OID *after* the root OIDs
retryCount -- number of retries
timeout -- initial timeout, which is multiplied by 1.5 on
each timeout iteration.
maxRepetitions -- max records to request with a single
bulk request
- areWeDone(self, response, roots, request, recordCallback=None)
- Callback which checks to see whether we're done;
if not, passes on the request and schedules the next iteration;
if so, returns None
- getTable(self, oids=None, roots=None, includeStart=0, retryCount=None, delay=None, firstCall=False)
- Retrieve all sub-oids from these roots
recordCallback -- called for *each* OID:value pair retrieved
recordCallback( root, oid, value )
includeStart -- at the moment, only implemented for v1 protocols,
ignored for v2c. Should likely be avoided entirely; it would
be implemented with a separate get call anyway, which may as
well be explicitly coded when you want it.
firstCall -- whether this is the first call; if it is, and we
allow caching, we'll ask the proxy to cache our encoded
request. We don't cache continuations because they will
be different depending on where the iteration happens to
break.
This is the "walk" example from pysnmp re-cast...
- integrateNewRecord(self, oidValues, rootOIDs)
- Integrate a record-set into the table
This method is quite simplistic in its approach: it
just checks, for each value in oidValues, whether it is a
child of a root in rootOIDs, and if it is, adds it to
the result-set for that root. This approach is a
little more robust than the previous one, which used
the standard's rather complex mechanism for mapping
root:oid, and was producing some very strange results
in certain testing situations.
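As an illustration only (not the module's code), the child-of-root check
described above amounts to something like the following, assuming
dotted-string OIDs:

    # Illustration of the integration approach: file each oid:value pair
    # under whichever root it falls beneath (dotted-string OIDs assumed)
    def integrate(result, oidValues, rootOIDs):
        for oid, value in oidValues:
            for root in rootOIDs:
                if oid != root and oid.startswith(root + '.'):
                    # oid is a child of this root; add it to that root's result-set
                    result.setdefault(root, {})[oid] = value
                    break
        return result

    table = integrate({}, [('.1.3.1.2', 'a'), ('.1.3.2.1', 'b')], ['.1.3.1', '.1.3.2'])
    # table == {'.1.3.1': {'.1.3.1.2': 'a'}, '.1.3.2': {'.1.3.2.1': 'b'}}
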
- scheduleIntegrate(self, oidValues, rootOIDs)
- Schedule integration of oidValues into this table's results
This breaks up the process so that we can process other events
before we do the (heavy) work of integrating the result-table...
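A common Twisted idiom for this kind of deferral is sketched below; it shows
the idea of yielding to the reactor, not necessarily the exact mechanism used
here.

    # Sketch: hand the heavy merge to a later reactor iteration so pending
    # network events get serviced first
    from twisted.internet import reactor

    def deferIntegration(integrate, oidValues, rootOIDs):
        # callLater(0, ...) runs the callable on a subsequent reactor pass
        return reactor.callLater(0, integrate, oidValues, rootOIDs)
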
- tableTimeout(self, df, key, oids, roots, includeStart, retryCount, delay)
- Table timeout implementation
Table queries time out if a single retrieval
takes longer than retryCount * self.timeout
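For a rough feel of the defaults (timeout=2.0, retryCount=4, with the 1.5x
escalation noted under __init__), the per-attempt waits work out as follows;
the figures are illustrative only.

    # Rough arithmetic for the default retry schedule (illustrative only):
    # initial timeout 2.0s, multiplied by 1.5 after each timed-out attempt
    timeout, retryCount = 2.0, 4
    delays = [timeout * 1.5 ** attempt for attempt in range(retryCount)]
    # delays == [2.0, 3.0, 4.5, 6.75]  -> roughly 16.25s before the query fails
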
Data and other attributes defined here:
- __dict__ = <dictproxy object>
- dictionary for instance variables (if defined)
- __weakref__ = <attribute '__weakref__' of 'TableRetriever' objects>
- list of weak references to the object (if defined)
- bulk = 1
- finished = 0