REFERENCE - THE JANE SERIES
CHAPTER 5 - THE TOOL
Whatever it was, the object that sat in the middle of the cubicle was ugly and clunky. It was built of countless layers, and it seemed to be sapping the people around it of their energy, as if it had an insatiable appetite for human sacrifice. "Isn't it beautiful? Isn't it just perfect?" one of the worshipers commented. Jane was stuck for words; she managed to get out an honest, "It's truly unique. I've never seen anything quite like it before."
The attendant started to chant: "It is the best, it is the best, it is the best, it is the best!" The others followed his lead and repeated it again and again. With renewed assurance they were able to ignore everything else going on around them. Jane decided it was best not to comment on the witty chant.
Jane was getting spooked by all of this. She asked one of the people on the side who seemed a little uneasy, "What is this all about?" "You don't know?" he replied, shocked by her ignorance. "This is the shrine of Tartarus 19. It is our home-grown test-bench. It does anything we want, it is perfect in every way, and to think otherwise would be blasphemous." He went on to describe Nyx, the companion tool that had kept them up all night.
Jane awoke from the nightmare in a cold sweat. Too many long meetings had taken their toll on her sleeping hours as well. Her next major step as verification lead was choosing the tools and the HVL they would use for verification, and, along with that, convincing management that she had taken all considerations into account.
Last week she had met with Matt, who was from another division. His team had developed its own home-grown test-bench (Erebus). At first the meeting went pretty smoothly. He explained how they had built a series of libraries in Perl and were using them to drive the RTL model. They had an expert on PLI and had been able to port the environment to two simulators. Matt's team was eager to proliferate the environment to other groups. Jane asked him a little about its constrained-generation, functional-coverage, and checking capabilities. He seemed to take these questions as a personal assault: he was protective of his home-grown test-bench and appeared to resent any question he felt implied it was less than perfect. Jane made a conscious effort to steer the conversation toward the positive sides of the test-bench, the reusable models, and the test-language. Matt emphasized how many man-years the company had already invested in the test-bench and showed her a roadmap for simplifying the test-language so that writing a test could be done with a few clicks in a GUI. The meeting ended with a mutual "we'll be in touch" and a friendly handshake.
To make sure she hadn't been too abrupt in eliminating the Erebus option, she went to speak with one of the engineers on a project verified with the Erebus system. She spoke to Quincy, who was in charge of the Chattahoochee project. Quincy showed her how they had used Erebus to create their tests, and spoke highly of the documentation and the ease of writing tests. "However," he added, "there are a few drawbacks."
"Jane, when you work with a system like this, you need to work the way the tool intends you to work," he started. "We have very specific test templates that we modify to create new tests. If we want to do random generation, or react during the test, we really don't have a solution. We can't really handle internal events such as overflows."
"On the random side, our capability is limited. We can generate random data, but we can't tie that generation to the generation of other elements in the system, and we don't really have any way to measure what the randomness has actually exercised."
"As for the 'ease of use': our users have an easy time writing tests, but they have a hell of a time debugging them. They have no idea what they're really doing, since they copy templates that are 300-400 lines long. And since the environment isn't very self-checking, we don't know whether those tests are really passing or whether the test writer reached something that looked OK on the waveform and marked it 'done.'"
"We also keep finding bugs in Erebus itself. That really makes things get ugly, since we need to debug the RTL, the test-bench environment, the tests, and the tool."
"To summarize: for a follow-on project with minor changes, where we have a large legacy of tests and clean, stable RTL, I can see the benefits of continuing with this type of system. But if you are really trying to clean up new RTL and hit the target the first time around, you are going to need a more stable, capable, and integrated tool."
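Quincy's complaint about tests that "look OK on the waveform" can be illustrated with a deliberately simplified, hypothetical sketch (plain Python standing in for a test-bench; the adder, its saturating spec, and all function names are invented for illustration). A stimulus-only test reports "done" no matter what the design did, while a self-checking test compares the design's output against an independent reference model:

```python
def dut_add(a, b):
    """Stand-in for the design under test. Hypothetical bug: it wraps
    on overflow instead of saturating as the (invented) spec requires."""
    return (a + b) & 0xFF

def reference_add(a, b):
    """Independent reference model of the spec: 8-bit saturating add."""
    return min(a + b, 0xFF)

def directed_test_no_check():
    # Template-style test: drive stimulus, mark 'done' without checking.
    dut_add(200, 100)   # overflow wraps silently; nobody notices
    return "done"       # looks like a pass either way

def directed_test_self_checking():
    # Self-checking test: the mismatch against the model exposes the bug.
    return dut_add(200, 100) == reference_add(200, 100)

print(directed_test_no_check())       # "done" regardless of correctness
print(directed_test_self_checking())  # False: the bug is caught
```

The first test "passes" even though the design is wrong; only the self-checking version turns the bug into a visible failure.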
The following afternoon Jane contacted Woo from the Mobile Wireless Access Platform Component Networking Products Group (MWAP-CNPG). The group had recently been created in a brilliant re-org and had a catchy name. Woo's team had been using a leading HVL for the past three products and seemed to be the company's experts on its use.
Woo was glad to talk to Jane about the use of the HVL. He described their initial project with the tool.
"At first we started using the tool just like they showed us in the introductory course. They said, 'You can do anything you did in your previous environment, but with our tool,' and that's exactly what we did. We wrote Verilog code, just in a different language. At the end of the project we hadn't seen any significant improvement: the schedule was still slipping and bugs were still being caught late in the design cycle. We were about to throw the tool out the window when someone suggested we might be doing something wrong. We brought in a consultant to see how we could improve for our next project."
Jane listened intently; she could see her team falling into a similar first-project trap.
"We decided to go top of the line and get someone in who had practical experience with both the HVL and multiple projects. The consultant started us on a complete re-education in the HVL. Basically she said, 'The HVL is a tool for coverage-driven random testing,' and showed us a series of concept shifts that would let us develop code focused on self-checking and randomization. She went through what a coverage point is and how to define and code one, and then walked us through how a project should look from start to finish."
"Needless to say, our next project flowed better, but we still hadn't convinced management of a dramatic change: the debugging was very taxing, and some of the little modules never stopped turning up corner cases."
"The big change happened in our third project, when we analyzed some of the sticky points from the previous one. We saw that our block-level verification was lacking and that we were re-inventing too many wheels. We also identified that our debug capabilities were not adequate."
"To address the problems, we went on a reuse binge, started checking our RTL better prior to integration, and added many hooks to improve debug. We made these changes along with a series of other tweaks, and the project finally flowed smoothly... there were always a few bumps in our way, but nothing compared to where we were."
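The coverage-driven random flow Woo's consultant taught can be sketched in miniature. This is a hypothetical plain-Python stand-in (a real project would use an HVL such as the one in the story; the transaction fields, constraints, and coverage bins here are all invented): generate constrained-random stimulus, self-check every result against a reference model, and keep going until every functional coverage point has been hit.

```python
import random

# Functional coverage points: which (opcode, size-class) pairs have we seen?
OPCODES = ["READ", "WRITE", "BURST"]
SIZE_BINS = ["small", "large"]
coverage = {(op, sz): 0 for op in OPCODES for sz in SIZE_BINS}

def gen_transaction(rng):
    """Constrained-random generation: the size constraint depends on the
    opcode chosen, i.e. one field's generation is tied to another's."""
    op = rng.choice(OPCODES)
    # Invented constraint: bursts are always at least 16 words long.
    size = rng.randint(16, 64) if op == "BURST" else rng.randint(1, 64)
    return op, size

def dut_model(op, size):
    """Stand-in for the device under test (trivial on purpose)."""
    return size * 2 if op == "WRITE" else size

def reference_model(op, size):
    """Independent reference model used for self-checking."""
    return size * 2 if op == "WRITE" else size

def run(seed=0, max_txns=10_000):
    rng = random.Random(seed)
    for _ in range(max_txns):
        op, size = gen_transaction(rng)
        # Self-checking: every transaction is compared to the model.
        assert dut_model(op, size) == reference_model(op, size)
        # Record functional coverage so we can measure what random did.
        sz_bin = "small" if size <= 32 else "large"
        coverage[(op, sz_bin)] += 1
        if all(hits > 0 for hits in coverage.values()):
            return True  # every coverage point exercised: coverage closure
    return False

print("coverage closed:", run())
```

The loop stops not after a fixed test list but when the coverage model says every interesting case has been exercised, which is the concept shift the consultant was describing.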
Jane wrote down every word he said. Somehow she felt the most important thing she had learned from him was that sometimes you need to admit, "When you just don't know what you're doing, asking for help is not a bad idea." She made a mental note to really learn from Woo's mistakes so that her team could have a better shot the first time around.
When Jane had finished all her rounds, she felt sure that going with an industry-leading HVL was the right choice for her project, but she realized that the benefit would be limited if she did not also adopt the right methodology.
Copyright © 2006 Ace Verification Corp. All rights reserved.