CBUS network being intermittently flooded with strange traffic

Discussion in 'C-Bus Wired Hardware' started by asimcox, Jul 9, 2019.

  1. asimcox

    asimcox

    Joined:
    Jul 19, 2017
    Messages:
    23
    Likes Received:
    0
    Location:
    Melbourne
    Hi All,

    I've been chasing a problem on my CBUS install and am starting to run out of ideas to find the issue. I'm hoping someone might be able to head me in the right direction.
    The system I have has been running well for around 6 years. For the past year or so I have also had it working in an OpenHab home automation system via a Raspberry Pi which is also running a CGATE instance.

The problem I am having is that I am getting large amounts of strange traffic flooding the network. This includes messages turning random groups on or off. Some of these groups exist in my system, some do not. When the messages turn a real group on or off it is usually already in that state, but occasionally it does something like turning my sweep fans on full when they were previously off, and once it turned my MRA amps on full blast in the middle of the night. Luckily the MRA was switched to an unused source so it was not really loud. This has only been happening for about the last couple of months. The symptoms are similar to those documented elsewhere on this forum, but tracking down the cause is proving fruitless so far. The traffic appears to come from my touchscreens (a black and white and a colour) and the multi room audio matrix switcher. The problem seems to resolve itself after a few days or after I power down the whole system, only to appear again, sometimes days later and sometimes within minutes. The Toolkit diagnostics show voltages between 28V and 32V on all devices, the network burden present, and the clock enabled on 3 devices.
From my reading, this type of issue is often associated with a power supply problem, and the origin of the messages is likely not the units shown in the Toolkit Application Log but rather the output units themselves.
I contacted Clipsal support about this and they told me how to check the power supplies on my system. Unfortunately every supply is giving the correct voltage, and the voltages from positive to earth and from negative to earth are within 0.1V of each other. Briefly looking at the supplies with a CRO for noise didn't help because I'm not sure what the waveform is supposed to look like with the C-Bus traffic present.
    I had the diagnostic software running at one point, but from my limited experience with it, I wasn't able to get any useful information.
Having confirmed this, I still went through a process of taking each of the output units with supplies out of the network, one by one and in combinations. This is a long process because of the intermittent nature of the problem. Sometimes it looks like I have found the culprit, only to have the issue pop up a day or so later.
I have more recently been going through a process of disconnecting units and breaking the network at that point to see whether the fault is cable or unit related. While I have found that some short sections of the network appear to be working OK, I haven't been able to isolate a particular section of devices or cable causing the fault, which leads me to think the problem could still be power supply related.
    I have completely swapped out each of the dimmer output units one by one with a brand new dimmer but it didn't help the situation.

    I have 49 devices on my network (plus an additional PCI I have on while trying to fix the network). According to Toolkit current consumption is 922mA and my supplies give 1950mA. Network impedance is 55ohms and the burden is present.
    I have the following units in the network
    4 x 8 channel dimmers with PS - one of which has the network burden turned on
    4 x 12 channel Voltage Free Relays with PS
    2 x Sweep Fan Controllers
    1 x 1 gang key input unit (plastic slimline)
    9 x 4 gang key input units (plastic slimline)
    8 x 5 gang saturn DLTs
    7 x 6 gang saturn key inputs
    6 x SENPIRIB PIRs
    1 x Black and white touchscreen with logic (no logic programmed, only some schedules)
    1 x Colour C-Touch Spectrum touchscreen with logic (no logic programmed)
    1 x 1st gen wiser (currently only being used as a CNI when required)
    1 x CBUS SIM (connected to the raspberry Pi)
    1 x Multi room audio switcher
    3 x MRA 25W amplifiers.

Another issue has appeared in the last week or so. One of the MRA 25W amplifiers which was previously working fine has stopped responding. It still shows up on a Toolkit network scan, but if I try to open it in Toolkit, it goes to 30% loading, briefly goes to 40%, then quickly back to 10, 20, then 30% and stays there until Toolkit gives an error saying it failed to load the unit programming (error 3036). According to a Toolkit scan summary the firmware on this unit is v5.4.00, which is the same as the other MRA units. I'm hoping this is not the first of more failures to come due to the issue I have been having.

    If anyone has any ideas what I can try, please let me know. I can send through more details and screen captures if that will help.

    Anthony
     
    Last edited: Jul 9, 2019
    asimcox, Jul 9, 2019
    #1
2. DarylMc

    DarylMc

    Joined:
    Mar 24, 2006
    Messages:
    1,308
    Likes Received:
    49
    Location:
    Cleveland, QLD, Australia
    Hi Anthony
    I get the impression you are using CBus Toolkit Application log to monitor network activity.
    Have you had a look at the CGate logs on the raspberry pi?
By default it should always be logging at a high level, and the logs are located at
    /usr/local/bin/cgate/logs
I haven't had an issue like you describe, but if messages are being sent unexpectedly I think it is going to be one of your logic-capable devices or whatever is talking to CGate on the RPi.
     
    DarylMc, Jul 9, 2019
    #2
  3. asimcox

    asimcox

    Joined:
    Jul 19, 2017
    Messages:
    23
    Likes Received:
    0
    Location:
    Melbourne
    Hi Daryl,

    Thanks for your response.
I also had some suspicions about the Raspberry Pi, which is why I borrowed a serial PCI from my brother to monitor the network as well.
I have tried moving the Raspberry Pi CGate to use the PCI from my Wiser rather than the SIM it was connected to, just in case that was causing an issue (the SIM is now disconnected from the network). I have also completely disconnected the Pi. None of this seems to have helped; I still get the messages.

My Raspberry Pi is back on the network using the Wiser PCI and I end up with between 4 and 10 log files per day when the network is acting up. When it is not, I get one file.
I had previously looked at the log files but they didn't seem to give much more information than the Toolkit log, except there were a fair few commands which appear to be from the OpenHab C-Bus binding and were just NOOPs. I'm assuming those are a keep-alive, as every now and then the OpenHab binding does a poll of the network status as well. The logs which show the "phantom" messages seemed to show they were indeed coming from each of the "smarter" or "unusual" devices, these being the touchscreens, the Wiser (when its PCI is not being used by the Pi) and the MRA switcher and amplifiers. The only problem is I have taken each of these out of the network one at a time and the other devices just take over and issue more messages.
However, after your prompting I have noticed one line I had missed in the cgate logs which precedes the phantom messages but doesn't appear in the Toolkit logs. The command is

    20190710-122940 734 //SIMCOX/254 3c594f90-82df-1037-ba1e-817d69fcadce response: 05C8380001300132013363 processed by network.

The response is different every time, but the first part of the command is the same. The only problem is I don't know cgate well enough to know what this command is. From what I can see in the cgate manual, code 734 is a "Response line:", which I had assumed was a response to another command, but since there doesn't appear to be a preceding command that doesn't make much sense. If I could find out which group/unit 3c594f90-82df-1037-ba1e-817d69fcadce refers to it might give me a clue. I have worked out the identifiers for some groups in the network by scanning the cgate log and checking the corresponding Toolkit log (eg 3c60a290-82df-1037-ba4b-817d69fcadce appears to be group 048 GarageHallLight).
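In case it helps anyone, a rough way to automate that cross-referencing is to scan the CGate log for any line that carries both an object path and one of those identifiers, since the 730/734 entries contain both (a minimal Python sketch, assuming the general line shape in the snippet below; the regex is only a guess at that format):

# Build a map from CGate object IDs to their C-Bus paths by scanning a CGate
# event log whose lines look like: "<yyyymmdd-hhmmss> <code> <//path> <uuid> ...".
import re
import sys

LINE_RE = re.compile(
    r"^\d{8}-\d{6}\s+\d+\s+(?P<path>//\S+)\s+(?P<uuid>[0-9a-f-]{36})\s"
)

def map_oids(log_file):
    oid_to_path = {}
    with open(log_file) as f:
        for line in f:
            m = LINE_RE.match(line.strip())
            if m:
                oid_to_path[m.group("uuid")] = m.group("path")
    return oid_to_path

if __name__ == "__main__":
    # e.g. python map_oids.py /usr/local/bin/cgate/logs/<some log file>
    for oid, path in sorted(map_oids(sys.argv[1]).items(), key=lambda kv: kv[1]):
        print(oid, "->", path)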

    I have copied a snippet of the cgate logs and the corresponding toolkit logs below. Hopefully someone might be able to shed some light on this a bit further.


    CGATE LOGS
    20190710-122940 734 //SIMCOX/254 3c594f90-82df-1037-ba1e-817d69fcadce response: 05C8380001300132013363 processed by network.

    20190710-122940 730 //SIMCOX/254/56/48 3c60a290-82df-1037-ba4b-817d69fcadce new level=0 sourceunit=200 ramptime=0
    20190710-122940 730 //SIMCOX/254/56/50 3c60f0b0-82df-1037-ba4d-817d69fcadce new level=0 sourceunit=200 ramptime=0
    20190710-122940 730 //SIMCOX/254/56/51 3c6117c0-82df-1037-ba4e-817d69fcadce new level=0 sourceunit=200 ramptime=0
    20190710-122943 734 //SIMCOX/254 3c594f90-82df-1037-ba1e-817d69fcadce response: 05C8380079107914E5 processed by network.
    20190710-122943 730 //SIMCOX/254/56/16 3c642500-82df-1037-ba7c-817d69fcadce new level=255 sourceunit=200 ramptime=0
    20190710-122943 730 //SIMCOX/254/56/20 3c64c140-82df-1037-ba81-817d69fcadce new level=255 sourceunit=200 ramptime=0
    20190710-122946 734 //SIMCOX/254 3c594f90-82df-1037-ba1e-817d69fcadce response: 05C8380001000108790A6E processed by network.
    20190710-122946 730 //SIMCOX/254/56/0 3c5c35c0-82df-1037-ba22-817d69fcadce new level=0 sourceunit=200 ramptime=0
    20190710-122946 730 //SIMCOX/254/56/8 3c63fdf0-82df-1037-ba71-817d69fcadce new level=0 sourceunit=200 ramptime=0
    20190710-122946 730 //SIMCOX/254/56/10 3c63d6e0-82df-1037-ba75-817d69fcadce new level=255 sourceunit=200 ramptime=0
    20190710-122946 734 //SIMCOX/254 3c594f90-82df-1037-ba1e-817d69fcadce response: 05C9380001060107EB processed by network.
    20190710-122946 730 //SIMCOX/254/56/6 3c6361b0-82df-1037-ba66-817d69fcadce new level=0 sourceunit=201 ramptime=0
    20190710-122946 730 //SIMCOX/254/56/7 3c5f6a10-82df-1037-ba2c-817d69fcadce new level=0 sourceunit=201 ramptime=0
    20190710-122949 761 cmd63 - Command: [25618] noop
    20190710-122949 766 cmd63 - Response: [25618] 200 OK.
    20190710-122952 734 //SIMCOX/254 3c594f90-82df-1037-ba1e-817d69fcadce response: 05C83800792A58 processed by network.
    20190710-122952 730 //SIMCOX/254/56/42 3c60f0b0-82df-1037-ba42-817d69fcadce new level=255 sourceunit=200 ramptime=0

    Toolkit Logs

    DateTime= 10/07/2019 12:29:39.849 App= 056 Lighting Group= 048 GarageHallLight Unit= 200 PC_CTBL/UPSTAIRS Event= Group off
    DateTime= 10/07/2019 12:29:39.890 App= 056 Lighting Group= 050 GarageLight Unit= 200 PC_CTBL/UPSTAIRS Event= Group off
    DateTime= 10/07/2019 12:29:39.893 App= 056 Lighting Group= 051 GarageExternalRear Unit= 200 PC_CTBL/UPSTAIRS Event= Group off
    DateTime= 10/07/2019 12:29:42.837 App= 056 Lighting Group= 016 StudyOuter Unit= 200 PC_CTBL/UPSTAIRS Event= Group on
    DateTime= 10/07/2019 12:29:42.878 App= 056 Lighting Group= 020 StudyInner Unit= 200 PC_CTBL/UPSTAIRS Event= Group on
    DateTime= 10/07/2019 12:29:45.852 App= 056 Lighting Group= 000 KitchenIslandLights Unit= 200 PC_CTBL/UPSTAIRS Event= Group off
    DateTime= 10/07/2019 12:29:45.894 App= 056 Lighting Group= 008 RumpusFront Unit= 200 PC_CTBL/UPSTAIRS Event= Group off
    DateTime= 10/07/2019 12:29:45.896 App= 056 Lighting Group= 010 BedRoom3Light Unit= 200 PC_CTBL/UPSTAIRS Event= Group on
    DateTime= 10/07/2019 12:29:45.938 App= 056 Lighting Group= 006 UpStairsBathroomLight Unit= 201 PC_CTDL/FAMILY Event= Group off
    DateTime= 10/07/2019 12:29:45.979 App= 056 Lighting Group= 007 EnsuiteLight Unit= 201 PC_CTDL/FAMILY Event= Group off
    DateTime= 10/07/2019 12:29:51.823 App= 056 Lighting Group= 042 WorkshopLight Unit= 200 PC_CTBL/UPSTAIRS Event= Group on
    I think I recall there is a way to get more in depth logging from CGate but I'll have to look into that a bit later. I am just on the way out to return my brother's PCI to him.

Anthony
     
    asimcox, Jul 10, 2019
    #3
4. DarylMc

    DarylMc

    Joined:
    Mar 24, 2006
    Messages:
    1,308
    Likes Received:
    49
    Location:
    Cleveland, QLD, Australia
Hello Anthony
    Hopefully someone else might see something in those logs.
    CGate log settings are in the CGate config file and you can read in the CGate manual how to change them.
    They should already be at the highest level 9.
It is interesting that there are many log files on the days with issues, because CGate will start a new log file daily, every time it restarts, or when the file gets above 5MB.
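If it's useful, a quick way to see that pattern is to count the files in the log directory by their modification date (a minimal Python sketch; it only looks at file timestamps, not the log contents):

# Count CGate log files per day to spot the bad days.
# Point it at the log directory, e.g. /usr/local/bin/cgate/logs
import sys
from collections import Counter
from datetime import date
from pathlib import Path

log_dir = Path(sys.argv[1] if len(sys.argv) > 1 else "/usr/local/bin/cgate/logs")
per_day = Counter(
    date.fromtimestamp(p.stat().st_mtime) for p in log_dir.iterdir() if p.is_file()
)
for day in sorted(per_day):
    print(day, per_day[day])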
    I'd start with the Wiser but I still think you should get all the Touchscreens and RPI off the CBus network and let it run for a while if you haven't already.
     
    DarylMc, Jul 10, 2019
    #4
5. DarylMc

    DarylMc

    Joined:
    Mar 24, 2006
    Messages:
    1,308
    Likes Received:
    49
    Location:
    Cleveland, QLD, Australia
Since you are using a remote CGate, it occurs to me that an easy way to break the network would be if the project xml file on the RPI was different to the xml on your Windows machine and you recently transferred some changes to your network with CBus Toolkit using the local CGate.
     
    DarylMc, Jul 10, 2019
    #5
  6. asimcox

    asimcox

    Joined:
    Jul 19, 2017
    Messages:
    23
    Likes Received:
    0
    Location:
    Melbourne
    Hi Daryl,

    Thanks for your ideas. I appreciate your help.

Yes, I am aware of the issue of having a different xml on the remote and the local machine. I don't run CGate locally on my normal Windows machine; all Toolkit work is done via the remote CGate. The test machine I had set up was a laptop with my brother's CNI, and it was only used for monitoring or for running the diagnostic software. I didn't make any changes to the network with this machine.

The logs are all 5MB. Four days ago there were 5 files, three days ago there were 8, over the last two days there were 4, and so far up to midday today only 1, which is around 2MB. So guess what?? I have some time today to track issues down and of course I can't get it to show a single fault!! According to the logs, it has not had a single issue all day.

While looking through the logs I have noticed that OpenHab generates a lot of commands when checking network status. The version of the binding I am using seems to do a brief scan and an in-depth scan, and both check addresses that are not in use. The in-depth scan actually checks every address for every in-use application and every possible level/selector, even if they are not defined. I just had a look and some of the great folks working on the C-Bus OpenHab binding have been really active lately, and it looks like there might be a newer binding available. I might try backing up my existing Pi and trying their binding. If they have made it smart enough to only look at in-use addresses it might cut down the network traffic and log use, and maybe make checking network status faster.
I might even spin up a whole new OpenHab/CGATE install, since it just means using a new memory card. Having to do all the configuration in OpenHab again is likely to be a PITA though.

    Anthony
     
    asimcox, Jul 11, 2019
    #6
7. ashleigh

    ashleigh Moderator

    Joined:
    Aug 4, 2004
    Messages:
    2,392
    Likes Received:
    24
    Location:
    Adelaide, South Australia
Dumb question: are you using a SIM to get into the C-Bus network? If you are, send me a PM.
     
    ashleigh, Jul 13, 2019
    #7
  8. asimcox

    asimcox

    Joined:
    Jul 19, 2017
    Messages:
    23
    Likes Received:
    0
    Location:
    Melbourne
    Hi Ashleigh

I normally use a SIM from a C-Bus alarm system I never installed, but while tracking down the issues I have switched to using the CNI from my Wiser 1.
    Anthony
     
    asimcox, Jul 13, 2019
    #8
9. NickD

    NickD Moderator

    Joined:
    Nov 1, 2004
    Messages:
    1,420
    Likes Received:
    62
    Location:
    Adelaide
    A few comments.

    That network impedance is too low... is that a typo? Regardless, with a network that size you should not have a burden enabled.

    This scan failure at 30% is normal for an MRA unit if the unit is not powered... it can read the parameters from the PCI in the unit but can't read from the second (main) processor because it's not responding.

This is a message from unit $C8 (200 decimal); the commands are 01 30, 01 32, and 01 33. 01 is an OFF command, and it's being sent to groups $30, $32, and $33 (48, 50, and 51).

    In the other messages you're seeing similar things on different groups from units C8 and C9 (200, 201). My guess is these are units trying to correct MMI errors.

    If you turn on event level 9 in the logs I think you should be able to see the MMIs which might be able to confirm this.

If it is MMI errors, this could be due to poor network communications... I would try turning off the burden and possibly removing the MRA matrix switcher (the 1950mA suggests you have an old one with an integrated power supply).
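For reference, if you want to decode those response strings yourself, here is a rough sketch along the lines of the breakdown above (an illustration only: it takes the layout as 05 / source unit / application / 00 / command bytes / checksum, only handles the plain off (0x01) and on (0x79) lighting commands seen in your logs, and assumes the checksum makes the bytes sum to zero mod 256, which holds for the strings you posted):

# Decode a monitored C-Bus SAL string such as "05C8380001300132013363".
# Layout assumed: 05, source unit, application, 00, command bytes, checksum.
def decode_sal(hex_str):
    data = bytes.fromhex(hex_str)
    if sum(data) % 256 != 0:
        raise ValueError("checksum failed")
    result = {"source_unit": data[1], "application": data[2], "commands": []}
    body = data[:-1]          # drop the checksum byte
    i = 4                     # skip 05, source, application, 00
    while i < len(body):
        if body[i] == 0x01 and i + 1 < len(body):      # OFF <group>
            result["commands"].append(("off", body[i + 1]))
            i += 2
        elif body[i] == 0x79 and i + 1 < len(body):    # ON <group>
            result["commands"].append(("on", body[i + 1]))
            i += 2
        else:                                          # anything else: keep raw
            result["commands"].append(("raw", body[i:].hex()))
            break
    return result

print(decode_sal("05C8380001300132013363"))
# {'source_unit': 200, 'application': 56, 'commands': [('off', 48), ('off', 50), ('off', 51)]}

That matches the Toolkit entries you posted for 12:29:39 (groups 48, 50 and 51 off, from unit 200).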

    Nick
     
    NickD, Jul 16, 2019
    #9
  10. asimcox

    asimcox

    Joined:
    Jul 19, 2017
    Messages:
    23
    Likes Received:
    0
    Location:
    Melbourne
    Hi Nick,

    Thanks for your reply.

    Yes, sorry, there is a 5 missing from the impedance. It should be 555.

    When this all started happening I did try switching off the burden, but that made the network comms so unreliable that it was hard to get it to do anything without erroring out. I actually had to isolate unit 1 just to turn the burden back on.

At the moment, the amplifiers are being powered by the matrix switcher. All three are configured the same but only this one is showing the problem. This particular unit is located next to one of the others and the C-Bus daisy-chains between them. I have tried swapping cabling etc. as part of the fault finding. I have some external power supplies for these which I have never used, so I will try powering this unit from a supply and see what happens. That, however, starts me wondering about the stability of the switcher's power supply, so perhaps the network issues might be related. I have never loaded the switcher up much. I have a bunch of 10W amps I haven't installed yet, but now that the kids are no longer toddlers I was intending to get them up and running. I might have to check out the power supply on the switcher before I try installing those.

Unit 200 is one of my touchscreens and 201 is the other. In all of the logs it has been these two screens, the MRA switcher/amps or the Wiser generating all the traffic. However, removing one of them from the network just transfers the traffic generation to one of the others.

Unfortunately (fortunately) the network has been stable and hasn't missed a beat since last Thursday, so it has behaved for nearly a week. As soon as it starts playing up again I will try your suggestions and see if they help. I might try to get hold of a hardware burden so I can easily remove it without reprogramming; unfortunately they only seem to be sold in packs of 10. I'll report back once the network fails again and I get a chance to test any or all of this.

I have been suspicious about the cabling for this install. The electrical company who originally put it in didn't inspire a lot of confidence and they had no clue when it came to programming; it was installed by the builder's electrician. Seeing some of their cabling (mains and C-Bus cat 5) made me shudder. I am a licenced cabler and installed all the rest of the data/comms cabling in the house while it was being built, but the builder said all the electrical had to be done by their electrician. The comment made by the electrician's apprentice when he saw my cabling was "I hope you don't expect our cables to look as neat as yours", which didn't help. As part of the fault finding when this issue came up I have been re-terminating and fixing up a lot of the pink cable runs, as they are pretty poor. However, it had been working for about 6 years with only a few glitches, so I guess I might just be a bit of a perfectionist, especially since it is my place.

    Anthony
     
    asimcox, Jul 16, 2019
    #10
  11. asimcox

    asimcox

    Joined:
    Jul 19, 2017
    Messages:
    23
    Likes Received:
    0
    Location:
    Melbourne
    Hi Nick,

    So the fun continues ......

    Last night the network went haywire again after more than a week where everything was normal.

    This time all of the PIRs have ceased functioning. I can still see them on the network, but none of them are working. Even the red light in the sensor itself is not turning on. I tried resetting one of them and reprogramming it, but this made no difference.

Then this morning none of the switches in the network functioned at all. When I looked in Toolkit, I couldn't even turn loads on or off. When I looked at a physical unit's programming it was screwed up: the key functions referred to the correct groups, but instead of the group name they showed the group number. The database units were all still correct. I transferred the programming for one of the dimmers from the database to the physical unit and I was then able to use Toolkit to turn those loads on. I then transferred a switch's programming and could also use that switch. So in the end I transferred the whole database to the physical units and the switches all started working again, but the PIRs remain non-functional.

I have tried turning off the burden, which is on unit 1 (a dimmer), but the network comms become very bad. A network scan from Toolkit then finds less than half the units, and finds some other units at addresses which are unused. Each scan shows different units and different phantom units, and most times the scan errors out. Turning the burden back on restores better communication, although with all the extra traffic being generated it is slow.

I have tried powering down the entire network a couple of times, which sometimes stops the issues for a while, but they keep coming back.

    All of the MRA devices have been removed from the network.

    Tomorrow's job will be to pull the touchscreens from the network to see what effect that has. I am also going to try physically disconnecting the PIRs in case they are causing problems, but I am doubtful that all of them would have failed at the same time.

I have tried setting the CGATE logging level to 9 and restarted CGATE and Toolkit, but it doesn't seem to make any difference to the logging. Is the latest version of CGATE already logging at level 9? I tried the instructions at https://www.cbusforums.com/threads/capturing-c-gate-logs.4724/. Is there some other way I should be telling CGATE to log at a higher level?

    Anthony
     
    asimcox, Jul 20, 2019
    #11
12. NickD

    NickD Moderator

    Joined:
    Nov 1, 2004
    Messages:
    1,420
    Likes Received:
    62
    Location:
    Adelaide
This seems weird... the fact that it read the right group number from the unit but couldn't correlate it with the tag from the CGate project suggests to me an issue with C-Gate and/or the project file.

Personally I am skeptical of running C-Gate on a Raspberry Pi... it might work fine as an interface for OpenHAB or something like that, but I would stay away from it for running Toolkit and managing your network, just because there is no validation done on that platform.

    To me the symptoms you describe still seem like an underlying communications problem... it could be a faulty unit, a faulty power supply, or an intermittent bad connection or water ingress or something like that.

    Nick
     
    NickD, Jul 23, 2019
    #12
    Ashley likes this.
  13. asimcox

    asimcox

    Joined:
    Jul 19, 2017
    Messages:
    23
    Likes Received:
    0
    Location:
    Melbourne
    Hi Nick,

I had a similar thought about the tags. However, as I mentioned, the whole system was unresponsive, and it was only after I re-downloaded parts, and then eventually all, of the programming that it became functional again, albeit with all of the additional traffic still happening. I can't see how this achieved what it did, but it made my wife happy when it worked, so I was grateful it did.

In normal situations I have CGate running on a Raspberry Pi primarily for OpenHAB, as I don't change things or use Toolkit very often (less than once a year). It has only been since I have been having the network problems that I have been monitoring it more closely. The OpenHAB install has been a bit of a lifesaver for the family while I have been disconnecting pink cable to track down the fault, as they have still been able to operate the lights in the house.

However, I am also using a second CNI with a laptop to monitor the network at the moment (I managed to hold on to my brother's CNI while I am trying to solve the problems). That second CNI is running its own CGate instance whose project was downloaded from the network, and I haven't been using it to make any changes. I initially used it because I couldn't get the diagnostics software to talk to either the SIM module connected to the Raspberry Pi or the CNI from my Wiser 1.

At the moment I have removed the MRA components as well as the touchscreens. The Wiser CNI is disconnected from the Wiser and is being used just as a CNI for Toolkit. This means the Wiser can be eliminated as a cause, and so can the SIM I was using with the Raspberry Pi. I am still having problems with network reliability with all of this disconnected, although I am not seeing the random traffic. The symptoms now manifest as unreliable switches whose indicators work as expected when you use them but don't always actually switch the load, or which flash when you try to use them. The PIRs also sometimes miss turning lights on and, more regularly, don't turn them off. All of this seems to point to network wiring.

    Before removing the touchscreens I disconnected a big chunk of the network at the switchboard end and the PIRs on the remaining section started functioning again. On reconnecting the network segment the other PIRs came back to operation, although the garage PIR is operating minus the red light that is usually turned on when it is detecting movement.

    All of this also points me to issues with the wiring, and as I mentioned I have always been a bit suspicious of it.
    From what I have been able to see so far, the power supplies all seem to be operating properly, and I have recently checked them a second time to make sure nothing has changed there.

    I have a couple of questions with regard to the wiring. Hopefully someone might be able to give me some advice.

    1. I am aware that CBUS networks can have network segments teeing off the main wiring, but is there a limit to how many of these there can be?
2. How are the ends of a network segment supposed to be terminated? Is this done by having a unit connected? I was previously aware of two segments which had an extra piece of pink cable at the end (which I have removed), and upon removing one of the touchscreens which I thought was at the end of a segment, I found it had 2 cables connected; the second goes into the inaccessible ceiling somewhere but is not connected to a device. I know there is no device there by a process of elimination using network scans, and because there is an open circuit between all wires in that cable. I suspect there may be more of these in the install, which I will probably find as I work my way through each of the switches to re-terminate cables. I would have thought transmission line theory would suggest unterminated cables could have reflection issues.
3. My network all runs to a sub-board where all of the relays and dimmers are located with their inbuilt power supplies. There are three segments wired into the first relay, and all the other relays and dimmers daisy-chain from that first one. This means the power supplies are at the end of the network. I assume that, to mitigate power loss over the cabling, it would be better to have one of the segments connected to the other end of the daisy chain so the power supplies were more "in the middle" of the network. Would this be a correct assumption? If so, I think I could do some re-wiring of accessible cables and end up with just two pink cables in the switchboard, and in the process remove a couple of the network segments and have more devices on the other side of the power supplies. If I can get hold of a network bridge I could also possibly break the network into two networks to help track down the fault.
4. Since it seems clear that my network needs a burden, is there a place in the network where it should ideally be located? My network currently uses a software-activated burden in one of the dimmers, so the burden is at one end of the network. If I did the re-wiring above, the burden would be further towards the middle of the network.

    Thanks
    Anthony
     
    asimcox, Jul 23, 2019
    #13
14. NickD

    NickD Moderator

    Joined:
    Nov 1, 2004
    Messages:
    1,420
    Likes Received:
    62
    Location:
    Adelaide
No, the only limitations are a total of less than 1km of C-Bus cable, and less than 2A of power supply (having lots more power supply than you need is also less than desirable, but not forbidden).

This is not an issue for C-Bus. The only thing you need to do is fit a burden if your network impedance (as calculated by Toolkit) is too high.

    The only issue with centrally located power supplies is voltage drop... if the units at the far ends of your network have sufficient voltage then there's no great reason to do this, but if it's easy enough to do it wouldn't hurt.

    The location of the burden shouldn't matter, but I don't agree that it's clear your network needs a burden...
555 ohms is at the low end of the acceptable impedance range. If you disable the burden you should see the impedance go back up to around 1250 ohms, which is fine. Normally you should only need to add a burden if your impedance is above about 1500 ohms. Also, the burden in your dimmer/relay etc. will only actually be active if the unit is at unit address 1... is that actually the case?
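As a rough sanity check of those numbers, treating the burden as roughly a 1k ohm load in parallel with the rest of the network (that resistance is an assumed figure for illustration, not something Toolkit reports):

# ~1250 ohm network in parallel with an assumed ~1000 ohm burden
r_network = 1250.0   # approx. impedance with the burden disabled (ohms)
r_burden = 1000.0    # assumed nominal burden resistance (ohms)
r_combined = 1 / (1 / r_network + 1 / r_burden)
print(round(r_combined))   # ~556, close to the 555 ohms Toolkit reports

That parallel combination is why the reading sits down around 555 with the burden enabled.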

    Nick
     
    NickD, Jul 24, 2019
    #14
15. DarylMc

    DarylMc

    Joined:
    Mar 24, 2006
    Messages:
    1,308
    Likes Received:
    49
    Location:
    Cleveland, QLD, Australia
    Hello Anthony
    Have you got two instances of CGate running on the network simultaneously?
     
    DarylMc, Jul 24, 2019
    #15
  16. asimcox

    asimcox

    Joined:
    Jul 19, 2017
    Messages:
    23
    Likes Received:
    0
    Location:
    Melbourne
    Hi Guys,

    Thanks for the continued replies and help. It is very appreciated.

Nick, it looks like I will have to keep working through my network to try to isolate the issue. I was hoping the segments or terminations might have been the problem; at least then I would have had a target to work on.
While I do have around 10km of cat 6, coax and alarm cable in the house, I'm pretty sure there is nowhere near 1000m of pink cable, and probably not even half of that (I think less than 2 boxes of cable went into the house).
Voltage as displayed by Toolkit and verified at points on the network is pretty good (28V being the lowest). I am close to the 2A with the supplies I have, but still just below it. I have a circuit breaker which switches the power to my 4 x 12-channel relays, so I can (and have) reduced the network current by half when trying to test and eliminate the relay units as a cause of the problem (the loads on those relays are on other breakers). The network voltage is still OK even with only half of the power supplies operating.

My burden comment referred to what happens when I turn the burden off. Even when the network is functioning correctly, turning the burden off pretty much stops most communications, and when the network is faulting, as it is at the moment, it just compounds the problems. Hopefully, if my problem turns out to be a wiring one, I might be able to do away with the burden once the wiring is fixed. The dimmer with the burden turned on is unit 1.

    Just out of interest (and clutching at another straw) do you know of any instances where induced noise has caused issues with CBus networks? Hopefully with two separate twisted pairs carrying the network signal that would not be a problem.

    Daryl,

    Is there a chance that two instances of CGate running on a network can damage it? Only one of my instances is used for any programming. The other one is only there to monitor what is going on when I have a failure of the network. I am using a serial CNI I borrowed from my brother after this problem started.

    I normally only have one CGate instance running on the network. This is usually a Raspberry Pi with a SIM module and it had been working really well for quite a while. When this fault popped up, one of my tests was to disconnect the Pi/SIM to make sure it wasn't causing the problem. To see what was happening when that was disconnected, I connected up another CNI and ran CGate/toolkit on a windows laptop. Since then I have reconnected the Raspberry Pi, but switched over to using the CNI from my old Wiser 1 just to eliminate the SIM as a cause of the problem. The SIM is not connected to anything at the moment.
    When I am trying to track down the fault I currently have, I have had both the PI and the laptop running CGate on the same network on occasion, but not all the time. When the fault is not present, the Pi CGate is the only one running. When that weird problem with the tags not pulling through and the PIRs becoming inoperative started the other day, the Pi was the only CGate on the network. The Pi has my CGate network config file. The laptop one just grabs the current setup from the network, so groups etc have no tags which is what I expect. I have got very good at knowing device and group numbers on my network from comparing the logs from the two toolkits :).

    Anthony
     
    asimcox, Jul 24, 2019
    #16
17. DarylMc

    DarylMc

    Joined:
    Mar 24, 2006
    Messages:
    1,308
    Likes Received:
    49
    Location:
    Cleveland, QLD, Australia
    Hello Anthony
    Nick is the person to listen to but I just wanted to clarify the setup.

    If you have two CGates running simultaneously I think it could give more chances for something to go wrong.
In the past the CGate version for Windows and the standalone version have not been identical, and previously there were issues programming some C-Bus units (eg the EDLT).
As Nick said, the Windows version of CGate is the one which is most commonly tested through use.
     
    DarylMc, Jul 24, 2019
    #17
18. DarylMc

    DarylMc

    Joined:
    Mar 24, 2006
    Messages:
    1,308
    Likes Received:
    49
    Location:
    Cleveland, QLD, Australia
    @NickD
Last year I was testing an RPi with Homebridge and CGate in a house and had some unexpected action going on with the lighting.
Some time not long after that, the 10-year-old CTC2 lost the ability to communicate with the C-Bus network.
Is that the sort of problem which might cause unexpected corrections on the network?
     
    DarylMc, Jul 24, 2019
    #18
19. intelectsol

    intelectsol

    Joined:
    Jan 5, 2009
    Messages:
    15
    Likes Received:
    1
    Location:
    Sydney
    Was this issue resolved?
    I am having a similar problem with my C-Bus Network...
     
    intelectsol, Jul 6, 2020
    #19
20. chromus

    chromus

    Joined:
    Jan 27, 2014
    Messages:
    422
    Likes Received:
    50
    Location:
    Perth
    What application number is the traffic on?

    The 2 most common are:
    56 is lighting
    202 is scene
     
    chromus, Jul 7, 2020
    #20