Old graphs working, SOME new graphs indicate 'poller may not have run yet'
Cacti 1.2.3 on Linux
NET-SNMP version: 5.7.2
RRDtool Version Found: 1.4.8
Devices: 442
Graphs: 5,405
SPINE: 1.2.3
Last Run Statistics: Time:38.4646 Method:spine Processes:4 Threads:4 Hosts:442 HostsPerProcess:111 DataSources:14052 RRDsProcessed:5860
All my old graphs are working fine and updating with current data.
*SOME* new graphs are throwing the 'poller may not have run yet' error, specifically all new graphs of source type 'Interface - Traffic', the 'In/Out Bits' graphs. Interface error graphs created at the same time are displaying correctly.
So: interface traffic graphs are not working; interface error graphs are working fine.
The poller is running and collecting data on the interfaces that do not show a graph.
Last edited by cheinzle on Tue Mar 26, 2024 7:28 am, edited 2 times in total.
Re: Old graphs working, SOME new graphs indicate 'poller may not have run yet'
Check the permissions on the rra directory. Can the poller user create new files there?
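A quick way to test that from the shell (a sketch; the rra path and poller account vary by install, e.g. /var/lib/cacti/rra or &lt;cacti_root&gt;/rra, owned by the apache or cacti user -- both are assumptions here):

```shell
# Sketch: can a given account create files in a directory?
# For the real test, run it as the poller user against your rra path, e.g.:
#   sudo -u <poller_user> sh -c '. ./check.sh; check_writable /var/lib/cacti/rra'
check_writable() {
    dir="$1"
    if touch "$dir/.cacti_write_test" 2>/dev/null; then
        rm -f "$dir/.cacti_write_test"   # clean up the probe file
        echo "writable: $dir"
    else
        echo "NOT writable: $dir"
    fi
}

# Demonstration against a scratch directory; point it at your rra path instead.
check_writable "$(mktemp -d)"
```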
Let the Cacti grow!
Re: Old graphs working, SOME new graphs indicate 'poller may not have run yet'
Wouldn't a permissions issue also keep other new graphs/RRDs from being created?
Re: Old graphs working, SOME new graphs indicate 'poller may not have run yet'
Any ideas on this one?
Re: Old graphs working, SOME new graphs indicate 'poller may not have run yet'
Run and show output:
sudo -u your_user_who_started_spine /path/to/spine --first=XX --last=XX -V=3 -R
where XX is the problematic device ID.
Let the Cacti grow!
Re: Old graphs working, SOME new graphs indicate 'poller may not have run yet'
[myusername@io bin]$ sudo /usr/local/spine/bin/spine -f=110 -l=110 -V=3 -R
SPINE: Using spine config file [../etc/spine.conf]
SPINE: Version 1.2.3 starting
PHP Notice: Constant FILTER_VALIDATE_IS_REGEX already defined in /usr/share/cacti/include/global_constants.php on line 388
PHP Notice: Constant FILTER_VALIDATE_IS_NUMERIC_ARRAY already defined in /usr/share/cacti/include/global_constants.php on line 389
PHP Notice: Constant FILTER_VALIDATE_IS_NUMERIC_LIST already defined in /usr/share/cacti/include/global_constants.php on line 390
PHP Notice: Constant FILTER_VALIDATE_IS_REGEX already defined in /usr/share/cacti/include/global_constants.php on line 388
PHP Notice: Constant FILTER_VALIDATE_IS_NUMERIC_ARRAY already defined in /usr/share/cacti/include/global_constants.php on line 389
PHP Notice: Constant FILTER_VALIDATE_IS_NUMERIC_LIST already defined in /usr/share/cacti/include/global_constants.php on line 390
2024/03/27 07:29:05 - SPINE: Poller[1] NOTE: Spine will support multithread device polling.
2024/03/27 07:29:05 - SPINE: Poller[1] DEBUG: Initial Value of Active Threads is 0
2024/03/27 07:29:05 - SPINE: Poller[1] SPINE: Active Threads is 1, Pending is 1
2024/03/27 07:29:05 - SPINE: Poller[1] Updating Full System Information Table
2024/03/27 07:29:05 - SPINE: Poller[1] Device[110] HT[1] DQ[4] RECACHE OID: .1.3.6.1.2.1.1.3.0, (assert: 1018138663 < output: 1018160554)
2024/03/27 07:29:05 - SPINE: Poller[1] Device[110] HT[1] NOTE: There are '30' Polling Items for this Device
2024/03/27 07:29:05 - SPINE: Poller[1] Device[110] HT[1] DS[4736] SCRIPT: php /usr/share/cacti/scripts/cisco_eigrp_peer_uptime.php '10.74.249.87' 'readonly', output: [0]:2808 [1]:264
2024/03/27 07:29:05 - SPINE: Poller[1] Device[110] HT[1] DS[2191] SNMP: v2: 10.74.249.87, dsname: traffic_in, oid: .1.3.6.1.2.1.31.1.1.1.6.13, value: 19733217660
2024/03/27 07:29:05 - SPINE: Poller[1] Device[110] HT[1] DS[2191] SNMP: v2: 10.74.249.87, dsname: traffic_out, oid: .1.3.6.1.2.1.31.1.1.1.10.13, value: 21367012385
2024/03/27 07:29:05 - SPINE: Poller[1] Device[110] HT[1] DS[2194] SNMP: v2: 10.74.249.87, dsname: errors_in, oid: .1.3.6.1.2.1.2.2.1.14.13, value: 0
2024/03/27 07:29:05 - SPINE: Poller[1] Device[110] HT[1] DS[2194] SNMP: v2: 10.74.249.87, dsname: errors_out, oid: .1.3.6.1.2.1.2.2.1.20.13, value: 0
2024/03/27 07:29:05 - SPINE: Poller[1] Device[110] HT[1] DS[2194] SNMP: v2: 10.74.249.87, dsname: discards_in, oid: .1.3.6.1.2.1.2.2.1.13.13, value: 0
2024/03/27 07:29:05 - SPINE: Poller[1] Device[110] HT[1] DS[2194] SNMP: v2: 10.74.249.87, dsname: discards_out, oid: .1.3.6.1.2.1.2.2.1.19.13, value: 0
2024/03/27 07:29:05 - SPINE: Poller[1] Device[110] HT[1] DS[10673] SNMP: v2: 10.74.249.87, dsname: traffic_in, oid: .1.3.6.1.2.1.31.1.1.1.6.1, value: 27547803634
2024/03/27 07:29:05 - SPINE: Poller[1] Device[110] HT[1] DS[10673] SNMP: v2: 10.74.249.87, dsname: traffic_out, oid: .1.3.6.1.2.1.31.1.1.1.10.1, value: 33005316804
2024/03/27 07:29:05 - SPINE: Poller[1] Device[110] HT[1] DS[10674] SNMP: v2: 10.74.249.87, dsname: traffic_in, oid: .1.3.6.1.2.1.31.1.1.1.6.3, value: 59846384660
2024/03/27 07:29:05 - SPINE: Poller[1] Device[110] HT[1] DS[10674] SNMP: v2: 10.74.249.87, dsname: traffic_out, oid: .1.3.6.1.2.1.31.1.1.1.10.3, value: 53682786217
2024/03/27 07:29:05 - SPINE: Poller[1] Device[110] HT[1] DS[10675] SNMP: v2: 10.74.249.87, dsname: traffic_in, oid: .1.3.6.1.2.1.31.1.1.1.6.6, value: 20595455110
2024/03/27 07:29:05 - SPINE: Poller[1] Device[110] HT[1] DS[10675] SNMP: v2: 10.74.249.87, dsname: traffic_out, oid: .1.3.6.1.2.1.31.1.1.1.10.6, value: 22698755545
2024/03/27 07:29:05 - SPINE: Poller[1] Device[110] HT[1] DS[10676] SNMP: v2: 10.74.249.87, dsname: errors_in, oid: .1.3.6.1.2.1.2.2.1.14.1, value: 0
2024/03/27 07:29:05 - SPINE: Poller[1] Device[110] HT[1] DS[10676] SNMP: v2: 10.74.249.87, dsname: errors_out, oid: .1.3.6.1.2.1.2.2.1.20.1, value: 0
2024/03/27 07:29:05 - SPINE: Poller[1] Device[110] HT[1] DS[10676] SNMP: v2: 10.74.249.87, dsname: discards_in, oid: .1.3.6.1.2.1.2.2.1.13.1, value: 2
2024/03/27 07:29:05 - SPINE: Poller[1] Device[110] HT[1] DS[10676] SNMP: v2: 10.74.249.87, dsname: discards_out, oid: .1.3.6.1.2.1.2.2.1.19.1, value: 0
2024/03/27 07:29:05 - SPINE: Poller[1] Device[110] HT[1] DS[10677] SNMP: v2: 10.74.249.87, dsname: errors_in, oid: .1.3.6.1.2.1.2.2.1.14.3, value: 0
2024/03/27 07:29:05 - SPINE: Poller[1] Device[110] HT[1] DS[10677] SNMP: v2: 10.74.249.87, dsname: errors_out, oid: .1.3.6.1.2.1.2.2.1.20.3, value: 0
2024/03/27 07:29:06 - SPINE: Poller[1] Device[110] HT[1] DS[10677] SNMP: v2: 10.74.249.87, dsname: discards_in, oid: .1.3.6.1.2.1.2.2.1.13.3, value: 0
2024/03/27 07:29:06 - SPINE: Poller[1] Device[110] HT[1] DS[10677] SNMP: v2: 10.74.249.87, dsname: discards_out, oid: .1.3.6.1.2.1.2.2.1.19.3, value: 0
2024/03/27 07:29:06 - SPINE: Poller[1] Device[110] HT[1] DS[10678] SNMP: v2: 10.74.249.87, dsname: errors_in, oid: .1.3.6.1.2.1.2.2.1.14.6, value: 0
2024/03/27 07:29:06 - SPINE: Poller[1] Device[110] HT[1] DS[10678] SNMP: v2: 10.74.249.87, dsname: errors_out, oid: .1.3.6.1.2.1.2.2.1.20.6, value: 0
2024/03/27 07:29:06 - SPINE: Poller[1] Device[110] HT[1] DS[10678] SNMP: v2: 10.74.249.87, dsname: discards_in, oid: .1.3.6.1.2.1.2.2.1.13.6, value: 0
2024/03/27 07:29:06 - SPINE: Poller[1] Device[110] HT[1] DS[10678] SNMP: v2: 10.74.249.87, dsname: discards_out, oid: .1.3.6.1.2.1.2.2.1.19.6, value: 0
2024/03/27 07:29:06 - SPINE: Poller[1] Device[110] HT[1] DS[967] SS[0] SERVER: /usr/share/cacti/scripts/ss_hstats.php ss_hstats '110' uptime, output: 1018138657
2024/03/27 07:29:06 - SPINE: Poller[1] Device[110] HT[1] DS[965] SNMP: v2: 10.74.249.87, dsname: 5min_cpu, oid: .1.3.6.1.4.1.9.2.1.58.0, value: 4
2024/03/27 07:29:06 - SPINE: Poller[1] Device[110] HT[1] DS[1771] SNMP: v2: 10.74.249.87, dsname: temp_generic, oid: .1.3.6.1.4.1.9.9.13.1.3.1.3.2, value: 19
2024/03/27 07:29:06 - SPINE: Poller[1] Device[110] HT[1] DS[1772] SNMP: v2: 10.74.249.87, dsname: mem_free, oid: .1.3.6.1.4.1.9.9.48.1.1.1.6.1, value: 241543996
2024/03/27 07:29:06 - SPINE: Poller[1] Device[110] HT[1] DS[1773] SNMP: v2: 10.74.249.87, dsname: mem_used, oid: .1.3.6.1.4.1.9.9.48.1.1.1.5.1, value: 87690320
2024/03/27 07:29:06 - SPINE: Poller[1] Device[110] HT[1] Total Time: 0.58 Seconds
2024/03/27 07:29:06 - SPINE: Poller[1] POLLER: Active Threads is 0, Pending is 0
2024/03/27 07:29:06 - SPINE: Poller[1] SPINE: The Final Value of Threads is 0
2024/03/27 07:29:06 - SPINE: Poller[1] Time: 1.2343 s, Threads: 4, Devices: 1
Re: Old graphs working, SOME new graphs indicate 'poller may not have run yet'
So the poller gets correct data. Try running it without -R (read only).
Let the Cacti grow!
Re: Old graphs working, SOME new graphs indicate 'poller may not have run yet'
It appears to be the same output. For this issue, DSes 10673, 10674, and 10675 are the ones having the problem, even though they are getting good data from the router in question.
[myusername@io bin]$ sudo /usr/local/spine/bin/spine -f=110 -l=110 -V=3
SPINE: Using spine config file [../etc/spine.conf]
SPINE: Version 1.2.3 starting
PHP Notice: Constant FILTER_VALIDATE_IS_REGEX already defined in /usr/share/cacti/include/global_constants.php on line 388
PHP Notice: Constant FILTER_VALIDATE_IS_NUMERIC_ARRAY already defined in /usr/share/cacti/include/global_constants.php on line 389
PHP Notice: Constant FILTER_VALIDATE_IS_NUMERIC_LIST already defined in /usr/share/cacti/include/global_constants.php on line 390
PHP Notice: Constant FILTER_VALIDATE_IS_REGEX already defined in /usr/share/cacti/include/global_constants.php on line 388
PHP Notice: Constant FILTER_VALIDATE_IS_NUMERIC_ARRAY already defined in /usr/share/cacti/include/global_constants.php on line 389
PHP Notice: Constant FILTER_VALIDATE_IS_NUMERIC_LIST already defined in /usr/share/cacti/include/global_constants.php on line 390
2024/03/29 07:21:15 - SPINE: Poller[1] NOTE: Spine will support multithread device polling.
2024/03/29 07:21:15 - SPINE: Poller[1] DEBUG: Initial Value of Active Threads is 0
2024/03/29 07:21:15 - SPINE: Poller[1] SPINE: Active Threads is 1, Pending is 1
2024/03/29 07:21:15 - SPINE: Poller[1] Updating Full System Information Table
2024/03/29 07:21:15 - SPINE: Poller[1] Device[110] HT[1] DQ[4] RECACHE OID: .1.3.6.1.2.1.1.3.0, (assert: 1035388534 < output: 1035393538)
2024/03/29 07:21:15 - SPINE: Poller[1] Device[110] HT[1] NOTE: There are '30' Polling Items for this Device
2024/03/29 07:21:15 - SPINE: Poller[1] Device[110] HT[1] DS[4736] SCRIPT: php /usr/share/cacti/scripts/cisco_eigrp_peer_uptime.php '10.74.249.87' 'readonly', output: [0]:2856 [1]:312
2024/03/29 07:21:15 - SPINE: Poller[1] Device[110] HT[1] DS[2191] SNMP: v2: 10.74.249.87, dsname: traffic_in, oid: .1.3.6.1.2.1.31.1.1.1.6.13, value: 20066872800
2024/03/29 07:21:15 - SPINE: Poller[1] Device[110] HT[1] DS[2191] SNMP: v2: 10.74.249.87, dsname: traffic_out, oid: .1.3.6.1.2.1.31.1.1.1.10.13, value: 21728824180
2024/03/29 07:21:15 - SPINE: Poller[1] Device[110] HT[1] DS[2194] SNMP: v2: 10.74.249.87, dsname: errors_in, oid: .1.3.6.1.2.1.2.2.1.14.13, value: 0
2024/03/29 07:21:15 - SPINE: Poller[1] Device[110] HT[1] DS[2194] SNMP: v2: 10.74.249.87, dsname: errors_out, oid: .1.3.6.1.2.1.2.2.1.20.13, value: 0
2024/03/29 07:21:15 - SPINE: Poller[1] Device[110] HT[1] DS[2194] SNMP: v2: 10.74.249.87, dsname: discards_in, oid: .1.3.6.1.2.1.2.2.1.13.13, value: 0
2024/03/29 07:21:15 - SPINE: Poller[1] Device[110] HT[1] DS[2194] SNMP: v2: 10.74.249.87, dsname: discards_out, oid: .1.3.6.1.2.1.2.2.1.19.13, value: 0
2024/03/29 07:21:15 - SPINE: Poller[1] Device[110] HT[1] DS[10673] SNMP: v2: 10.74.249.87, dsname: traffic_in, oid: .1.3.6.1.2.1.31.1.1.1.6.1, value: 28003137190
2024/03/29 07:21:15 - SPINE: Poller[1] Device[110] HT[1] DS[10673] SNMP: v2: 10.74.249.87, dsname: traffic_out, oid: .1.3.6.1.2.1.31.1.1.1.10.1, value: 33455634510
2024/03/29 07:21:15 - SPINE: Poller[1] Device[110] HT[1] DS[10674] SNMP: v2: 10.74.249.87, dsname: traffic_in, oid: .1.3.6.1.2.1.31.1.1.1.6.3, value: 60751538678
2024/03/29 07:21:15 - SPINE: Poller[1] Device[110] HT[1] DS[10674] SNMP: v2: 10.74.249.87, dsname: traffic_out, oid: .1.3.6.1.2.1.31.1.1.1.10.3, value: 54580636257
2024/03/29 07:21:15 - SPINE: Poller[1] Device[110] HT[1] DS[10675] SNMP: v2: 10.74.249.87, dsname: traffic_in, oid: .1.3.6.1.2.1.31.1.1.1.6.6, value: 20943648722
2024/03/29 07:21:15 - SPINE: Poller[1] Device[110] HT[1] DS[10675] SNMP: v2: 10.74.249.87, dsname: traffic_out, oid: .1.3.6.1.2.1.31.1.1.1.10.6, value: 23083126108
2024/03/29 07:21:15 - SPINE: Poller[1] Device[110] HT[1] DS[10676] SNMP: v2: 10.74.249.87, dsname: errors_in, oid: .1.3.6.1.2.1.2.2.1.14.1, value: 0
2024/03/29 07:21:15 - SPINE: Poller[1] Device[110] HT[1] DS[10676] SNMP: v2: 10.74.249.87, dsname: errors_out, oid: .1.3.6.1.2.1.2.2.1.20.1, value: 0
2024/03/29 07:21:15 - SPINE: Poller[1] Device[110] HT[1] DS[10676] SNMP: v2: 10.74.249.87, dsname: discards_in, oid: .1.3.6.1.2.1.2.2.1.13.1, value: 2
2024/03/29 07:21:15 - SPINE: Poller[1] Device[110] HT[1] DS[10676] SNMP: v2: 10.74.249.87, dsname: discards_out, oid: .1.3.6.1.2.1.2.2.1.19.1, value: 0
2024/03/29 07:21:15 - SPINE: Poller[1] Device[110] HT[1] DS[10677] SNMP: v2: 10.74.249.87, dsname: errors_in, oid: .1.3.6.1.2.1.2.2.1.14.3, value: 0
2024/03/29 07:21:15 - SPINE: Poller[1] Device[110] HT[1] DS[10677] SNMP: v2: 10.74.249.87, dsname: errors_out, oid: .1.3.6.1.2.1.2.2.1.20.3, value: 0
2024/03/29 07:21:15 - SPINE: Poller[1] Device[110] HT[1] DS[10677] SNMP: v2: 10.74.249.87, dsname: discards_in, oid: .1.3.6.1.2.1.2.2.1.13.3, value: 0
2024/03/29 07:21:15 - SPINE: Poller[1] Device[110] HT[1] DS[10677] SNMP: v2: 10.74.249.87, dsname: discards_out, oid: .1.3.6.1.2.1.2.2.1.19.3, value: 0
2024/03/29 07:21:15 - SPINE: Poller[1] Device[110] HT[1] DS[10678] SNMP: v2: 10.74.249.87, dsname: errors_in, oid: .1.3.6.1.2.1.2.2.1.14.6, value: 0
2024/03/29 07:21:15 - SPINE: Poller[1] Device[110] HT[1] DS[10678] SNMP: v2: 10.74.249.87, dsname: errors_out, oid: .1.3.6.1.2.1.2.2.1.20.6, value: 0
2024/03/29 07:21:15 - SPINE: Poller[1] Device[110] HT[1] DS[10678] SNMP: v2: 10.74.249.87, dsname: discards_in, oid: .1.3.6.1.2.1.2.2.1.13.6, value: 0
2024/03/29 07:21:15 - SPINE: Poller[1] Device[110] HT[1] DS[10678] SNMP: v2: 10.74.249.87, dsname: discards_out, oid: .1.3.6.1.2.1.2.2.1.19.6, value: 0
2024/03/29 07:21:15 - SPINE: Poller[1] Device[110] HT[1] DS[967] SS[0] SERVER: /usr/share/cacti/scripts/ss_hstats.php ss_hstats '110' uptime, output: 1035393532
2024/03/29 07:21:15 - SPINE: Poller[1] Device[110] HT[1] DS[965] SNMP: v2: 10.74.249.87, dsname: 5min_cpu, oid: .1.3.6.1.4.1.9.2.1.58.0, value: 3
2024/03/29 07:21:15 - SPINE: Poller[1] Device[110] HT[1] DS[1771] SNMP: v2: 10.74.249.87, dsname: temp_generic, oid: .1.3.6.1.4.1.9.9.13.1.3.1.3.2, value: 19
2024/03/29 07:21:15 - SPINE: Poller[1] Device[110] HT[1] DS[1772] SNMP: v2: 10.74.249.87, dsname: mem_free, oid: .1.3.6.1.4.1.9.9.48.1.1.1.6.1, value: 241542868
2024/03/29 07:21:15 - SPINE: Poller[1] Device[110] HT[1] DS[1773] SNMP: v2: 10.74.249.87, dsname: mem_used, oid: .1.3.6.1.4.1.9.9.48.1.1.1.5.1, value: 87691448
2024/03/29 07:21:15 - SPINE: Poller[1] Device[110] HT[1] Total Time: 0.59 Seconds
2024/03/29 07:21:15 - SPINE: Poller[1] POLLER: Active Threads is 0, Pending is 0
2024/03/29 07:21:16 - SPINE: Poller[1] SPINE: The Final Value of Threads is 0
2024/03/29 07:21:16 - SPINE: Poller[1] Time: 1.2257 s, Threads: 4, Devices: 1
Re: Old graphs working, SOME new graphs indicate 'poller may not have run yet'
Still no graph?
Let the Cacti grow!
Re: Old graphs working, SOME new graphs indicate 'poller may not have run yet'
Nope, still nothing. An RRD issue, maybe?
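One way to narrow that down from the shell (a sketch; the RRD path below is hypothetical -- look up the real path for DS 10673-10675 under Console &gt; Data Sources):

```shell
# Sketch: is the poller actually writing the RRD behind a broken graph?
# On a healthy 5-minute poller the file's mtime should be under ~300s old.
check_rrd_fresh() {
    f="$1"
    if [ ! -f "$f" ]; then
        echo "missing: $f"      # file was never created -- matches the error
        return 1
    fi
    age=$(( $(date +%s) - $(stat -c %Y "$f") ))   # GNU stat (Linux)
    echo "age: ${age}s"
    if [ "$age" -lt 300 ]; then echo "updating"; else echo "stale"; fi
}

# Demonstration on a freshly created file; point it at the real RRD instead,
# e.g. check_rrd_fresh /var/lib/cacti/rra/<device>_traffic_in_10673.rrd
check_rrd_fresh "$(mktemp)"
```

If the file exists, `rrdtool last <file>` and `rrdtool info <file>` can confirm the last update time and the DS definitions.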
Re: Old graphs working, SOME new graphs indicate 'poller may not have run yet'
Can you try 'Interface - Traffic (In/Out Bits, 64-bit)'? You have to use at least SNMP v2, not v1, for 64-bit counters.
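You can verify the device answers the 64-bit (HC) counters directly; a sketch, using the host and community that appear in the spine output above (substitute your own, and the right ifIndex):

```shell
# Sketch: SNMP v1 only carries the 32-bit ifInOctets/ifOutOctets;
# the 64-bit ifHCInOctets/ifHCOutOctets need v2c or v3.
# ".1" is the ifIndex -- adjust to the interface in question.
snmpget -v2c -c readonly 10.74.249.87 \
    IF-MIB::ifHCInOctets.1 IF-MIB::ifHCOutOctets.1

# The same counters by numeric OID, as seen in the spine log:
#   .1.3.6.1.2.1.31.1.1.1.6.<ifIndex>   (in)
#   .1.3.6.1.2.1.31.1.1.1.10.<ifIndex>  (out)
```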
Your Cacti is very old; you should update to 1.2.26.
Let the Cacti grow!
Re: Old graphs working, SOME new graphs indicate 'poller may not have run yet'
Already using v2, per the results returned from manually running the poller.
Cacti version is 1.2.3. I would upgrade, but in my experience the risk of upgrading outweighs the reward: I've never had one work out successfully, and I always end up having to start from scratch. In this case the server is a VM, so I would at least have a better recovery path.
So (and I'm obviously not an expert here) it would seem that it's not a poller issue, as 98% of the graphs are being polled and working properly, and the results of manually polling this specific node show that the expected values are being returned. Further, only NEW graphs of type 'traffic in/out bits (64-bit)' are not displaying correctly; there are many, many graphs of this type that continue to work and stay accurate.
So, what would cause an RRD file to not be generated even though results are coming back from the polling process, and only for new graphs of one specific graph type?
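One place to look is the poller tables themselves: spine writes its results into poller_output, and (with boost disabled) poller.php is what flushes them into the RRD files listed in poller_item. A sketch, assuming the default database name and the DS IDs above (credentials come from your include/config.php):

```shell
# Sketch: where should these data sources be written, and are collected
# values stuck waiting to be flushed? DB name/user are assumptions.
mysql -u cactiuser -p cacti -e "
    -- Destination files for the broken data sources:
    SELECT local_data_id, rrd_name, rrd_path
    FROM poller_item
    WHERE local_data_id IN (10673, 10674, 10675);

    -- Rows that linger here between poller runs were collected
    -- but never written out:
    SELECT local_data_id, rrd_name, time, output
    FROM poller_output
    WHERE local_data_id IN (10673, 10674, 10675);"
```

Then `ls -l` the rrd_path values to see whether those files were ever created at all.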
Re: Old graphs working, SOME new graphs indicate 'poller may not have run yet'
We have documentation describing the upgrade process:
https://docs.cacti.net/README.md#cacti-installation
Please start with an upgrade. Supporting old versions is problematic for us; we are only a small team. Your problem may already be fixed in a newer version.
Let the Cacti grow!