OWFS stores recent measurements and only updates them every 15 seconds or so, unless you specifically ask for a fresh ("uncached") measurement. This makes the system appear faster and often works well (how fast does the temperature really change?), but it can be confusing.
You're reading temperatures and you get abrupt changes. What's happening?
OWFS, by default, will "cache" some values, rather than rereading. You can get around this easily by reading from the "uncached" directory.
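The cached and uncached views of a device differ only in the path prefix. A small helper sketches the mapping; the mountpoint /mnt/1wire and the DS18S20 sensor ID used below are hypothetical examples, not values from this document:

```python
def uncached_path(path, mountpoint="/mnt/1wire"):
    """Map a normal OWFS path to its "uncached" twin, which forces a
    fresh bus reading instead of returning the cached value."""
    if not path.startswith(mountpoint):
        raise ValueError("path is not under the OWFS mountpoint")
    return mountpoint + "/uncached" + path[len(mountpoint):]

# Hypothetical DS18S20 sensor ID, for illustration only:
print(uncached_path("/mnt/1wire/10.67C6697351FF/temperature"))
# -> /mnt/1wire/uncached/10.67C6697351FF/temperature
```

Reading the second path always triggers a real conversion on the bus; reading the first may return a value up to 15 seconds old.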
Forcing a measurement
A new measurement (or new reading of device contents) will be made whenever:
- A reading from the "uncached" directory is requested.
- The old data has expired.
- Some obscure invalidating condition applies (requesting a simultaneous conversion, writing single elements of aggregate data, ...)
What is cached, and for how long?
Volatile values
- Things changed externally, like temperature, voltages, electrical contacts, ...
- 15 second default cache timeout
Stable values
- Values you set (memory contents, internal flags, switch settings)
- Writing to the value updates the cache
- 300 second (5 minute) default timeout
Directory lists
- List of devices connected to the 1-wire bus
- 60 second (1 minute) default timeout
Presence
- Location of a device (which of several buses)
- 120 second (2 minute) default timeout
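The expiry rules above can be modeled in a few lines. This is a toy sketch of a cache with per-class timeouts, not the OWFS implementation; the class names follow the list above:

```python
import time

# Default OWFS cache timeouts, in seconds, from the list above.
TIMEOUTS = {"volatile": 15, "stable": 300, "directory": 60, "presence": 120}

class TimeoutCache:
    """Each entry remembers when it was stored and which timeout
    class (volatile, stable, directory, presence) it belongs to."""
    def __init__(self, timeouts=TIMEOUTS):
        self.timeouts = dict(timeouts)
        self.store = {}                      # key -> (value, stored_at)

    def put(self, key, value):
        # Writing a value also refreshes the cache entry.
        self.store[key] = (value, time.monotonic())

    def get(self, key, kind, read_fresh):
        """Return the cached value if it has not expired; otherwise call
        read_fresh() -- the slow bus transaction -- and re-cache it."""
        entry = self.store.get(key)
        if entry is not None:
            value, stored_at = entry
            if time.monotonic() - stored_at < self.timeouts[kind]:
                return value                 # cache hit, no bus traffic
        value = read_fresh()
        self.put(key, value)
        return value
```

Note that `put` on a write path updates the cache directly, mirroring "writing to the value updates the cache" for stable data.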
Static data / Statistics / Settings / Internal states
- Generated "on-the-fly" by OWFS
- Examples include device names, addresses, CRC8, property lists, statistics
- Never cached -- no advantage over recalculating
Changing timeouts -- command line
~/owfs> owhttpd --help=cache
1-WIRE access programs by Paul H Alfille and others.
Cache and Timing Help
--cache_size n Size in bytes of max cache memory. 0 for no limit.
Cache timing [default] (in seconds)
--timeout_volatile [ 15] Expiration time for changing data (e.g. temperature)
--timeout_stable [300] Expiration time for stable data (e.g. temperature limit)
--timeout_directory [ 60] Expiration of directory lists
--timeout_presence [120] Expiration of known 1-wire device location
Communication timing [default] (in seconds)
--timeout_serial [ 5] Timeout for serial port read
--timeout_usb [ 5] Timeout for USB transaction
--timeout_network [ 1] Timeout for each network transaction
--timeout_server [ 10] Timeout for first server connection
--timeout_ftp  Timeout for FTP session
Copyright 2003 GPLv2. See http://www.owfs.org for support, downloads
Changing timeouts -- within the program
The timeouts are read/writable values found under /settings/timeout/[volatile|stable|directory|presence]
For example, owhttpd exposes these as readable and writable entries in its web interface.
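Because the timeouts live under /settings/timeout/..., they can also be changed by writing to those paths. A minimal sketch, assuming a FUSE-mounted OWFS at the hypothetical mountpoint /mnt/1wire (the write is skipped when no such mount exists):

```python
from pathlib import Path

def set_timeout(kind, seconds, mountpoint="/mnt/1wire"):
    """Build the OWFS settings path for a cache timeout and, when the
    filesystem is actually mounted, write the new value (in seconds).
    Always returns the path used, so it is safe to call for inspection."""
    if kind not in ("volatile", "stable", "directory", "presence"):
        raise ValueError("unknown timeout class: " + kind)
    path = Path(mountpoint, "settings", "timeout", kind)
    if path.exists():                  # only write on a live OWFS mount
        path.write_text(str(seconds))
    return path

# e.g. set_timeout("volatile", 30) targets /mnt/1wire/settings/timeout/volatile
```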
- Can caching be turned off?
Yes, at compile time (DISABLE_CACHE) or by setting the timeout to 0 at runtime.
- Can caching memory be bounded?
Yes, with the --cache_size command line argument.
- How much memory does each entry use?
Not much. Depends on computer architecture (32 bit vs 64 bit) but about 32 bytes for the header and then data element size.
- What happens when memory is exhausted?
In theory OWFS will degrade gracefully since caching is optional. In practice, other memory allocations will also start failing, which may be awkward.
- Is there a natural bound to memory usage for the cache?
Probably, based on the physical limits of the 1-wire bus. Perhaps 2K bytes/second * 3600 second timeout * number of buses. This assumes 1000s of physical 1-wire devices. Even so, the memory size is trivial for modern machines.
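The back-of-envelope bound above works out as follows. A quick sketch, taking "2K" as 2048 bytes/second and assuming, purely for illustration, a worst-case one-hour timeout and four attached buses:

```python
bus_rate = 2048          # bytes/second: rough 1-wire bus throughput ceiling
timeout = 3600           # seconds: a worst-case one-hour cache timeout
buses = 4                # hypothetical number of attached buses

bound = bus_rate * timeout * buses
print(bound / 2**20)     # cache bound in MiB -> 28.125
```

Even in this pessimistic scenario the bound is under 30 MiB, which supports the claim that the memory size is trivial for modern machines.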
Tests on Caching -- single process
Consider the time taken to read a bank of 10 DS18S20 temperature sensors in succession (several times).
After the first 10 readings, the cached process only rarely needs to delay -- only when the cached data expires. The uncached process takes far longer.
What does this mean?
- The DS18S20 is particularly slow at full resolution (~1 second/sample)
- Caching essentially "decouples" your program from the need to worry about sampling time. You can read the temperature whenever you want, and have a delay only when it's time for a new reading.
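The decoupling effect can be shown with a small wrapper: callers read whenever they like, and only the read that finds an expired value pays the conversion delay. This is a sketch, not the OWFS code; the injectable clock exists only to make the timing visible:

```python
import time

def make_cached_reader(read_sensor, timeout=15.0, clock=time.monotonic):
    """Wrap a slow sensor read so callers pay the ~1 s conversion delay
    only when the cached value has expired."""
    state = {"value": None, "at": -float("inf")}
    def read():
        now = clock()
        if now - state["at"] >= timeout:
            state["value"] = read_sensor()   # slow: waits for a conversion
            state["at"] = now
        return state["value"]
    return read
```

With the default 15 second volatile timeout, a tight loop of reads triggers one conversion per 15 second window regardless of how often the program asks.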
Tests on Caching -- two processes
Let's add two concurrent processes, each trying the same task as before -- reading 10 temperature sensors in a loop.
- Uncached contention is at least additive in time, so the elapsed time roughly doubles with each added process.
- Cached dual processes are still very fast, though at roughly twice the elapsed time of the single process. Either this is caused purely by bus locking and network traffic, or the slight extra work pushed us past the volatile timeout.
The actual design of the cache is quite clever, and well tuned. Briefly, data is keyed by device and "property" in a red-black tree. See Cache design for implementation details. Access is fast (binary search in main memory) and the table is regularly purged of expired data.
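The keyed-lookup-plus-purge scheme can be sketched as follows. Python has no built-in red-black tree, so a dict stands in here; the point is the (device, property) key and the periodic sweep of expired entries. The sensor ID below is a hypothetical example:

```python
import time

class KeyedCache:
    """Entries keyed by (device, property), as in the OWFS cache;
    a dict stands in for the red-black tree."""
    def __init__(self):
        self.entries = {}    # (device, property) -> (value, expires_at)

    def put(self, device, prop, value, timeout):
        self.entries[(device, prop)] = (value, time.monotonic() + timeout)

    def get(self, device, prop):
        entry = self.entries.get((device, prop))
        if entry is None or entry[1] < time.monotonic():
            return None      # miss, or present but expired
        return entry[0]

    def purge(self):
        """Drop expired entries, as OWFS does periodically."""
        now = time.monotonic()
        self.entries = {k: v for k, v in self.entries.items()
                        if v[1] >= now}
```

Lookups stay fast because they touch only main memory, and `purge` keeps the table from accumulating stale data between reads.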