Hi, my name is Chris, and I’m proud to be part of the Procentec Support Team. Based in the Netherlands, we work tirelessly to support a global network of distributors, engineers, and network architects, helping to keep their PROFIBUS and PROFINET industrial networks operational 24/7.
One of the most enjoyable, yet essential, parts of our job is participating in the training and certification of engineers. It allows us to experience, share and solve problems first-hand, and hopefully to build relationships and friendships that benefit us both in the future. It was during one of these training sessions that an incident occurred that I would like to share with you.
Our client is one of the largest baggage handlers in the world. And when you are financially penalized for every single lost bag or missed connection, industrial network reliability becomes core to your business. Whilst exploring strategies for design, maintenance, and troubleshooting, it soon became apparent that even our client’s most experienced engineers were being frustrated by the dizzying number of possibilities to consider when diagnosing a network issue.
Multiple PROFIBUS and PROFINET networks can become incredibly complex if not organized properly, and without the correct tools, a fast and accurate diagnosis was proving a challenge. Luckily for me (disputably), the training session took place across the road from a major airport hub. Here, our client had a system that consistently suffered from a repetitive fault that could often bring the network down. Neither in-house engineers nor specialist contractors had been able to pinpoint the problem. I guess at times we have to practice what we preach. So, after obtaining the correct passes, we took the training out of the classroom and into the workplace.
The sheer size and complexity of the site were almost incomprehensible. To put this in context, this was one of the most extensive sorting facilities in the world, in one of the largest airports in the world. Downtime wasn’t an option, and to compound this, it was midday on a Friday, quite possibly the busiest time of the week. Tens of thousands of pieces of luggage an hour would be processed and sent on their way, so a delay of only a few minutes could have massive consequences. A significant multi-network outage could create a backlog so immense it could potentially take days to clear, with financial penalties and fines to match. This was indeed a high-stakes operation where network stability was paramount.
Our plan: we were going to attempt to tap into the live network, and then diagnose and rectify the fault, hopefully without triggering a global baggage catastrophe. So… no pressure then.
In a somewhat anticlimactic turn to my story, going in via a tapping port, I plugged in the Procentec Mercury diagnostics tool and was very swiftly able to detect the problem. The software told us that data packets were being dropped continuously from many devices on the network. Further analysis of the cycle times showed that these, as is very common, had been left on Automatic, which by default sets all the update times to just 2 ms. Put simply: way too fast for the size of this network.
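To give a feel for why a blanket 2 ms update time can overwhelm a network of this size, here is a quick back-of-envelope sketch. The device count, frame size, and link speed below are purely illustrative assumptions, not the actual figures from the site:

```python
# Back-of-envelope estimate of cyclic PROFINET load on a 100 Mbit/s link.
# All numbers below are illustrative assumptions, not figures from the site.

LINK_CAPACITY_BPS = 100_000_000  # Fast Ethernet

def cyclic_load(num_devices: int, frame_bytes: int, update_time_s: float) -> float:
    """Fraction of link capacity consumed by cyclic I/O traffic.

    Each device sends one frame per update interval; we count one
    direction only, so this is a rough lower bound on the real load.
    """
    bits_per_second = num_devices * frame_bytes * 8 / update_time_s
    return bits_per_second / LINK_CAPACITY_BPS

# A hypothetical large segment: 150 devices, ~100-byte RT frames.
for update_ms in (2, 8, 32):
    load = cyclic_load(num_devices=150, frame_bytes=100,
                       update_time_s=update_ms / 1000)
    print(f"update time {update_ms:>2} ms -> cyclic load ~{load:.0%} of link capacity")
```

With these assumed numbers, 2 ms updates mean cyclic traffic alone eats well over half the link before any alarms, diagnostics, or acyclic traffic are accounted for; slowing the updates even modestly frees enormous headroom.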
So, after many previously unsuccessful attempts to solve this problem, and considering the inconvenience, frustration, and financial penalties of the consequent network downtime, this had been a very expensive headache for our client. With their prior knowledge of the system components, the newly certified team was able to calculate the correct update times for the guilty devices. After applying these changes, I’m proud to report, they have had no further occurrences of this issue on the network. Phew!
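For the curious, here is a hedged sketch of how such a correction might be approached: give each device the slowest standard PROFINET update time (these come in powers of two milliseconds) that still meets its process requirement, rather than leaving everything at the automatic 2 ms. The device names and response requirements below are invented for illustration; the real calculation naturally depends on the actual equipment:

```python
# Hypothetical sketch: choose, per device, the slowest standard PROFINET
# update time (powers of two, in ms) that still satisfies the process.
# The device names and required-response figures are invented examples.

STANDARD_UPDATE_TIMES_MS = [1, 2, 4, 8, 16, 32, 64, 128, 256, 512]

def pick_update_time(required_response_ms: float) -> int:
    """Slowest standard update time that does not exceed the process
    requirement (a simple rule of thumb, not a Procentec formula)."""
    candidates = [t for t in STANDARD_UPDATE_TIMES_MS if t <= required_response_ms]
    return max(candidates) if candidates else STANDARD_UPDATE_TIMES_MS[0]

# Example: a conveyor photo-eye needs ~20 ms, a motor starter ~100 ms.
devices = {"photo_eye_14": 20.0, "motor_starter_03": 100.0, "diverter_07": 50.0}
for name, req in devices.items():
    print(f"{name}: required {req} ms -> configure {pick_update_time(req)} ms")
```

The idea is simply to spread the cyclic traffic out: a diverter that physically reacts in tens of milliseconds gains nothing from 2 ms updates, while the network gains back all of that bandwidth.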