Internet Engineering Task Force                         Vinay Vasudeva
INTERNET DRAFT                                              Vishal Rai
draft-rai-test-auto-devices-00.txt           Infosys Technologies Ltd.
Expires: May 2002                                     November 14, 2001

      Test Automation for Layer 2/3 Devices in the Internetworking
                               Environment

Status of this Memo

This document is an Internet-Draft and is in full conformance with all provisions of Section 10 of RFC 2026. It is intended to be an informational document.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

Comments should be addressed to the authors.

Abstract

Test automation is a process that is being adopted aggressively by vendors to broaden the scope of testing. Automation of the testing of L2/L3 devices can also enable a faster, 'always on' test environment in the test labs, thus maximizing usage of lab infrastructure round the clock, without human intervention. More often than not, a lack of standards can cause an automation architecture to go awry, and the cost of this can be catastrophic. Hence this informational draft endeavours to bring forth a standard framework for automation, which can be referred to by the test teams of various network equipment vendors and users.

Table of Contents

1.   Introduction ............................................... 3
2.   Test Automation Organizational Mindset ..................... 3
2.1  Test Documentation ......................................... 4
2.2  Test Specifications ........................................ 5
2.3  Test Results Analysis Report ............................... 5
2.4  The GUI Paradigm ........................................... 5
3.   Cost-Effective Automated Testing ........................... 7
4.   The Record/Playback Myth ................................... 7
5.   Viable Automated Testing Methodologies ..................... 8
5.1  The "Functional Decomposition" Method ...................... 8
5.2  The "Test Plan Driven" Method ............................. 10
6.   Imparting Cost and Process Efficiency to Test Automation .. 15
7.   Security Considerations ................................... 18
8.   Authors' Addresses ........................................ 18
9.   Acknowledgement ........................................... 18
10.  References ................................................ 18

1. Introduction

Test automation is an integral part of product releases in the networking world today, especially to ensure that newer features do not break older ones. Preceding automation is a formalized and effective "manual testing process" that is practised with a detailed and mature understanding of the test cases, their coverage, domain and implementation issues, including the feature under test and the configuration commands specific to the device under test.
The two basic constituents of any test approach in this regard would include:

- Detailed test cases, including predictable "expected results", which have been developed from RFCs and from customized test cases and design documentation.
- A standalone, preferably GUI-based, Integrated Test Environment, encompassing a traffic generator, the Device Under Test and a workstation to support and run the application and other third-party tools.

If the existing testing process does not include the above points, it will be difficult to achieve any effective use of an automation test platform. There is no real point in trying to automate something that does not exist. We must first establish an effective testing process with well-defined test matrices, environment and setup.

The maximum benefits of automated test tools are derived in a regression testing environment. This means that we must have, or must develop, a matrix or suite of detailed test cases that are repeatable, covering all possible options/parameters of the available features on the DUT, and this suite of tests is run every time there is a change to the networking software image to ensure that the changes do not produce unintended consequences. The suite also has to be resilient and scalable enough to adapt to future extensions, changes and enhancements without involving much rework by the testing teams. An "automated test script" is a program that forms the 'genes' of the entire automation setup.

2. Test Automation Organizational Mindset

- Establish clear and reasonable expectations as to what can and what cannot be accomplished with automated testing in one's organization.
- Get a clear idea of what one is really getting into.
- Establish what percentage of testing is a good candidate for automation.
- Eliminate overly complex or one-of-a-kind tests as candidates.
- Get a clear understanding of the requirements which must be met in order to be successful with automated testing.
- An effective manual testing process must exist before automation is possible. "Ad hoc" testing cannot be automated. One should have:
  * Detailed, repeatable test cases, which contain exact expected results;
  * A standalone test environment with a restorable database.
- Adopt a viable, cost-effective methodology:
  * The Record/Playback scheme is too costly to maintain and is ineffective in the long term;
  * The Functional Decomposition method is workable, but is not as cost-effective as a totally data-driven method;
  * The Test Plan Driven method is the most cost-effective: test cases developed can be used for manual testing, and are automatically usable for automated testing.
- Select a tool that will allow you to implement automated testing in a way that conforms to your long-term testing strategy.

The general test development life cycle hence contains the following components:

2.1 Test Documentation

Test documentation is a required tool for managing and maintaining the testing process. Documents produced by testers should answer the following questions:

* What to test?
* How to test?
* What are the results?

Test Case Suite Outline

1. BACKGROUND
2. INTRODUCTION
3. ASSUMPTIONS
4. TEST ITEMS
   List each of the items to be tested.
5. FEATURES TO BE TESTED
   List each of the features (functions or requirements) which will be tested or demonstrated by the test.
6. FEATURES NOT TO BE TESTED
   Explicitly list each feature, function, or requirement which will not be tested, and why not.
7. APPROACH
   Describe the data flows and test philosophy; simulation or live execution, etc.
8. ITEM PASS/FAIL CRITERIA
   Blanket statement; itemized list of expected output and tolerances.
9. SUSPENSION/RESUMPTION CRITERIA
   Must the test run from start to completion? Under what circumstances may it be resumed in the middle? Establish checkpoints in long tests.
10. TEST DELIVERABLES
   What, besides the application, will be delivered? Test report; test tools.
11. ENVIRONMENTAL NEEDS
   Network isolation and security clearance; office space and equipment; hardware/software requirements.
12. GUI
   User-friendly, GUI-based look and feel.

2.2 Test Specifications

The test case specifications should be developed from the test plan and are the second phase of the automation test development life cycle. The test specification should explain "how" to implement the test cases described in the test plan.

Test Specification Items

Each test specification should contain the following items:

Case No.: The test case number should be a three-part identifier of the form c.s.t, where c is the chapter number, s is the section number, and t is the test case number.
Title: The title of the test.
ScriptName: The name of the script (program) containing the test.
Author: The person who wrote the test specification.
Date: The date of the last revision to the test case.
Background (Objectives, Assumptions, References, Success Criteria): Describes in words how to conduct the test.
Expected Error(s): Describes any errors expected.
Reference(s): Lists the reference documentation used to design the specification.
Data (Tx Data, Predicted Rx Data): Describes the data flows between the Feature Under Test (FUT) and the test engine.
Script (Pseudo Code for Coding Tests): Pseudo code (or real code) used to conduct the test.

2.3 Test Results Analysis Report

The Test Results Analysis Report is an analysis of the results of running the tests. The results analysis provides management and the development team with a readout of the feature and application quality. The following sections should be included in the results analysis:

* Management Summary
* Test Results Analysis
* Test Logs/Traces

2.4 The GUI Paradigm

The basic paradigm for GUI-based automated regression testing should incorporate the following steps:

a. Plug in and run the tests.
b. If a test fails, generate a bug report. Flag the test as a failure, so that it is recorded as a 'To be Verified' test to be re-run later, when the bug is rectified.
c. If the test passes, integrate it with the rest after seeing that it works fine. Capture the screen output at the end of the test. Save the test case and the output.
d. Next time, run the test case and compare its output to the previously saved results. This will give an indicative analysis of the feature under test.
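A minimal sketch of the compare-to-baseline step described in (b)-(d) is shown below, using Python purely for illustration; the file layout, the run_test() callable and the result strings are assumptions, not part of any particular tool.

   # Sketch only: compare a test's captured output against a saved
   # baseline; save the baseline on the first run.
   import filecmp, os, shutil

   def run_and_verify(test_name, run_test, baseline_dir="baselines"):
       output_file = run_test(test_name)           # captures output to a file
       os.makedirs(baseline_dir, exist_ok=True)
       baseline = os.path.join(baseline_dir, test_name + ".out")
       if not os.path.exists(baseline):
           shutil.copy(output_file, baseline)      # first run: save baseline
           return "BASELINE-SAVED"
       if filecmp.cmp(output_file, baseline, shallow=False):
           return "PASS"
       return "FAIL"                               # flag as 'To be Verified'

In practice the comparison is better made against logical results (counters, routes, states) than against raw screen captures, for the reasons discussed later in this document.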
Issues:

- Developing GUI-based automation is not cheap. The initial phase could take as much as three times as long to develop, create, verify, and minimally document an automated test as manual testing would. However, in the long run this works out to be extremely useful and leads to substantial savings in the complete testing cycle. Most of the tests will be worth automating, but for those tests that you run only once or twice, this approach is inefficient. Some people recommend that testers automate 100% of their test cases. We should be judicious about this, as we create and run many black-box tests only once. To automate these one-shot tests, we would have to spend substantially more effort and cost per test. It is a trade-off between lower coverage and a higher cost per test.
- This approach creates risks of additional costs. We all know that the cost of finding and fixing bugs increases over time. As a product gets closer to its (scheduled) release date, more people become associated with it (e.g. in-house beta users, friendly customers, documentation teams for manuals and marketing materials, etc.). The later you find and fix significant bugs, the more of these people's time will be wasted. If you spend most of your early testing time writing test scripts, you will delay finding bugs until later, when they are more expensive. Hence a suggested approach is to synchronize the automation effort with the release plan as well.
- These tests may not be powerful. The only tests you automate are tests that have already passed. The number of new bugs found this way ranges from 6% to 30% of the total bugs submitted.
- In practice, many test groups automate only the easy-to-run tests, to avoid the mundane feeling of doing the same tests again. Sometimes automation may fall short in comparison with the increasingly harsh testing done by a skilled manual tester.
- Recognize that test automation development is like any other software development activity:
  o It is code, even if the programming language is funky.
  o Within an application dedicated to testing a program, every test case is a feature.
  o From the viewpoint of the automated test application, every aspect of the underlying platform (on which you are testing) qualifies as a feature to be tested as well.

Test automation application software developers (in this case, the testers) must:

* understand the requirements;
* adopt an architecture that allows us to efficiently develop, integrate, scale and maintain the features and data;
* adopt and live with standards;
* be disciplined. Without this, we should be prepared to fail.

3. Cost-Effective Automated Testing

Automated testing is expensive. It does not replace the need for manual testing or enable you to "down-size" your testing department. Automated testing is an addition to your testing process. According to statistics, in the initial phase it can take 3 to 10 times as long (or longer) to develop, verify, and document an automated test case as to create and execute a test case manually. This is especially true if you elect to use the "record/playback" feature (contained in most test tools) as your primary automated testing methodology. Record/Playback is the least cost-effective method of automating test cases. Automated testing can be made cost-effective, however, if some practicality is applied to the process:

* Choose a test tool that best fits the testing requirements of your organization or company.
* Realize that it does not make sense to automate all tests. Overly complex tests are often more trouble than they are worth to automate. Concentrate on automating the majority of your tests, which are probably fairly straightforward. Leave the overly complex tests for manual testing.
* Avoid using "Record/Playback" as a method of automating testing. This method is fraught with problems, and is the most costly (time consuming) of all methods over the long term. The record/playback feature of the test tool may be useful for determining how the test tool processes or interacts with the feature under test, and can give you some ideas about how to develop your test scripts, but beyond that, its usefulness ends quickly.
* Adopt a traffic-data-driven automated testing methodology. This allows you to develop automated test scripts that are more "generic", requiring only that the input and expected results be updated (a minimal sketch follows this list).
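To illustrate the data-driven point above, the following Python sketch keeps the test logic generic and puts the inputs and expected results in data records; the record layout and the send_and_count() helper that drives the traffic generator are hypothetical assumptions.

   # Sketch only: a generic, data-driven check.  New cases are added by
   # adding records; the script itself never changes.
   TEST_RECORDS = [
       # (frame_size, frames_sent, expected_received)
       (64,   1000, 1000),
       (1518, 1000, 1000),
   ]

   def run_data_driven(send_and_count, records=TEST_RECORDS):
       results = []
       for size, sent, expected in records:
           received = send_and_count(size, sent)    # drives the traffic tool
           verdict = "PASS" if received == expected else "FAIL"
           results.append((verdict, size, sent, expected, received))
       return results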
4. The Record/Playback Myth

The Record/Playback methodology emphasizes that the testers first run the tests manually and record the correct results; this correct output is then used as the reference against which the output of all subsequent runs of those tests is checked. This approach is fraught with certain disadvantages:

* The scripts resulting from this method contain hard-coded values which must change if anything at all changes in the application.
* The costs associated with maintaining such scripts are astronomical, and unacceptable.
* These scripts may not be reliable, even if the application has not changed, and often fail on replay (pop-up windows, messages, and other things can happen that did not happen when the test was recorded).
* If the tester makes an error entering data, etc., the test must be re-recorded.
* If the application changes, the test must be re-recorded.

What we are seeing here is that all that is being tested are things that already work. Areas that have errors are encountered in the recording process (which is manual testing, after all). These bugs are reported, but a script cannot be recorded until the networking image software is corrected.

5. Viable Automated Testing Methodologies

Now that we have observed that Record/Playback may not be viable as a long-term automated testing strategy, let us discuss methodologies that are generally found to be effective for automating functional or system testing in most networking environments.

5.1 The "Functional Decomposition" Method

The main concept behind the "Functional Decomposition" script development methodology is to reduce all test cases to their most fundamental tasks, and to write User-Defined Functions, Feature Scripts, and "Sub-routine" or "Utility" Scripts which perform these tasks independently of one another. In general, these fundamental areas include:

1. Navigation: refers to the total look and feel of the GUI-based test automation application.
2. Specific Feature: refers to the various networking features under test.
3. Frame/Packet Data Verification: mostly performed using traffic generator applications.
4. Reporting and Logging: for each step performed and its output in turn.

A hierarchical architecture is employed, using a structured or modular design. The highest level is the Driver Script, which is the engine of the test. The Driver Script contains a series of calls to one or more "Test Case" scripts. The "Test Case" scripts contain the test case logic, calling the Feature Function scripts necessary to perform the application testing. Utility scripts and functions are called as needed by the Drivers.

Functional description of the automation components:

* Driver Scripts: perform initialization (if required), then call the Test Case Scripts in the desired order.
* Test Case Scripts: perform the application test case logic using Feature Function Scripts.
* Feature Function Scripts: perform specific feature process implementations within the application.
* Subroutine Scripts: perform application-specific tasks required by two or more Feature Scripts.
* User-Defined Functions: general, application-specific, and screen-access functions.
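A hedged Python sketch of this layering is given below; the dut object, its send_cli() method and the VLAN feature function are purely illustrative assumptions and not part of the method itself.

   # Sketch only: driver -> test case -> feature function layering.
   # Feature functions return "PASS"/"FAIL" rather than aborting, so the
   # driver can continue with the remaining test cases.

   def configure_vlan(dut, vlan_id):              # feature function script
       ok = dut.send_cli("vlan %d" % vlan_id)     # hypothetical DUT API
       return "PASS" if ok else "FAIL"

   def test_case_vlan_basic(dut):                 # test case script
       return [("configure_vlan", configure_vlan(dut, 10))]

   def driver(dut, test_cases):                   # driver script
       report = []
       for case in test_cases:                    # call test cases in order
           report.extend(case(dut))
       return report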
Advantages

1. Utilizing a modular design, and using configuration files/packet generators for both the input data and the output data to be verified, reduces redundancy and duplication of effort in creating automated test scripts.
2. Scripts may be developed while feature development is still in progress. If functionality changes, only the specific "Feature API Function" script needs to be updated.
3. Since scripts are written to perform and test individual features, they can easily be combined in a "higher level" test script in order to accommodate complex test case scenarios, say in an interop setup.
4. Data input/output and expected results are stored as easily maintainable text records. The user's expected results are used for verification, which is a requirement for system feature testing.
5. Functions return "Feature Pass" or "Feature/Test Fail" values to the calling script, rather than aborting, allowing for more effective error handling and increasing the robustness of the test scripts. This, along with a well-designed "recovery" routine, enables "unattended" execution of test scripts.

Disadvantages

1. Requires proficiency in the scripting language used by the tool (technical personnel).
2. Multiple script files are required for each test case. There may be any number of traffic-data inputs and verifications required, depending on how many interdependencies and parameters there are to check.
3. The tester must maintain the detailed test plan with all help and topology files.
4. If a simple text editor such as Notepad or vi is used to create and maintain the data files, careful attention must be paid to the format required by the scripts/functions that process the files, or script-processing errors will occur due to the data-file format and/or content being incorrect.

5.2 The "Test Plan Driven" Method

The architecture of the "Test Plan Driven" method appears similar to that of the "Functional Decomposition" method, but there are subtle differences (a minimal sketch of the "Controller" appears at the end of this section):

- Driver Script
  * Performs initialization, if required;
  * Calls the application-specific "Controller" script, passing to it the file names of the test cases.
- The "Controller" Script
  * Reads and processes the file name received from the Driver;
  * Matches on the "Validations" contained in the test file;
  * Builds a test report with the checks that follow;
  * Calls the "Utility" scripts associated with the "Feature" scripts.
- Utility Scripts
  * Process the input parameters and functions received from the "Controller" script;
  * Perform specific tasks (e.g. press a key or button, enter data, verify data, etc.), calling "User-Defined Functions" if required;
  * Report any errors to a test report for the test case;
  * Return to the "Controller" script.
- User-Defined Functions
  * General and application-specific functions may be called by any of the above script types in order to perform specific debug tasks.

Cost-Effectiveness

- Normally, we have to create a minimum of three application-specific utility scripts (a "Controller" script, a "Start_Up" script, and an "End_Test" script).
- We may also have to create application-specific versions of several of the available "general" utility scripts.
- It is also usually necessary to develop suitable feature-specific "functions" (depending on how complex or unusual the feature under test is). These functions include activating and shutting down the application, logging in, logging out, recovering from unexpected errors ("return to main window"), handling objects that the tool does not recognize, etc.
- A number of "prototype" test cases must be created as a "proof of concept". This includes developing an exhaustive test matrix. Sometimes test cases that have already been developed can be used; other times we have to create the test cases ourselves.
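To make the difference from the previous method concrete, here is a hedged Python sketch of a "Controller" that reads keyword-style steps from a test-case file and dispatches them to utility functions; the file format, the keywords and the dut methods are assumptions chosen for illustration only.

   # Sketch only: each non-comment line of the test-case file holds a
   # keyword and its arguments, e.g. "SEND_TRAFFIC 64 1000" or
   # "VERIFY_COUNT 1000".

   def send_traffic(dut, size, count):            # utility scripts
       return "PASS" if dut.generate(int(size), int(count)) else "FAIL"

   def verify_count(dut, expected):
       return "PASS" if dut.received() == int(expected) else "FAIL"

   UTILITIES = {"SEND_TRAFFIC": send_traffic, "VERIFY_COUNT": verify_count}

   def controller(dut, test_file):                # controller script
       report = []
       with open(test_file) as f:
           for line in f:
               if not line.strip() or line.startswith("#"):
                   continue                       # skip blanks and comments
               keyword, *args = line.split()
               report.append((keyword, UTILITIES[keyword](dut, *args)))
       return report

Because the test-case files double as readable test plans, the same files can drive both manual and automated execution, which is the main cost advantage claimed for this method.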
What this demonstrates is that a networking company can implement cost-effective automated testing if it goes about it the right way. The test automation thus provided serves several purposes at once. It does away with the cost and effort involved in:

* Manual testing
* Imparting training and development, time and again
* Changes to existing topologies
* Human error
* Redundancy of process and monotonous workflow
* Excessive time consumption
* Scalability constraints
* Reusability hiccups
* Technical-expertise and work-flow management overhead
* Quality control suffering at the hands of other prioritized factors
* Unreliable effort and untimely reporting and logs

Test Automation Logistics Principles

1. In automation planning, as in so many other endeavors, one must keep in mind the situation one is trying to resolve, and the related context.
2. GUI test automation is a significant software development effort that requires architecture, standards, and discipline. The general principles that apply to software design and implementation apply to automation design and implementation.
3. For efficiency and maintainability, we first need to develop an automation structure that is invariant across feature changes; we should develop GUI-based automation content only as features stabilize.
4. The following patterns can be observed in the evolution of a company's automation efforts over time.
   First generalization: in the absence of previous automation experience, most automation efforts evolve through:
   a. Failure in capture/playback. It does not matter whether we are capturing outputs or sequences;
   b. Development of libraries that are maintained on an ongoing basis. The libraries might contain scripted test cases or traffic-driven tests.
   Second generalization: common automation initiative failures are due to:
   a. Using capture/playback as the principal means of creating test cases;
   b. Using individually scripted test cases (i.e. test cases that individuals code on their own, without following common standards and without building common APIs);
   c. Using poorly designed frameworks. This is a common problem.
5. Straight replay of test cases yields a low percentage of defects. Once the program passes a test, it is unlikely to fail that test again in the future. This has led to the observation that automated testing can be dangerous, because it can give us a false, warm and fuzzy feeling that the networking software is not broken. Even if the image is not broken today in the ways that it was not broken yesterday, there are probably many other ways in which the image is broken, but you will not find them if you keep looking where the bugs are not.
6. Using the tool more efficiently implies having a way of determining whether the program passed or failed a test that does not depend on previously captured output. For example:
   - Run the same series of tests on the program across different networking image versions or configurations. You may never have tested the program in this particular environment, but you know how it should work.
   - Run a function equivalence test. In this case, you run two tests in parallel and feed the same inputs to both. The program that you are testing passes the test if its results always match those of the comparison program.
   - Instrument the feature under test so that it will generate a log entry any time the program reaches an unexpected state, e.g. makes an unexpected state transition, uses memory, RIB/FIB space or other resources in an unexpected way, or does anything else that is an indicator of one of the types of errors under investigation. Use the test tool to randomly drive the program through a huge number of state transitions, logging the commands that it executes as it goes. Using this, the tester and the programmer can trace through the log looking for bugs and the circumstances that triggered them. This is a simple example of a simulation.
7. Automation can be much more successful when we collaborate with the programmers to develop hooks, interfaces, and debug output.
8. Most code that is generated by a capture utility is not maintainable and of no long-term value. However, the capture utility can be useful when writing a test, because it shows how the tool interprets a series of recent events. The script created by the capture tool can give you useful ideas for writing your own code.
9. We should not use screen shots "at all" because they are a waste of time. (Actually, we mean that we hate using screen shots and use them only when necessary. We do find value in comparing small sections of the screen, and sometimes we have to compare screen shots. But to the extent possible, we should be comparing logical results.)
10. Don't lose sight of the testing in test automation. It is too easy to get trapped in writing scripts instead of looking for bugs.

Test Design

11. Automating the easy stuff is probably not the right strategy per se. A large collection of simple, easy-to-pass test cases might look more rigorous than ad hoc manual testing, but a competent manual tester is probably running increasingly complex tests as the program stabilizes.
12. Combining tests can find new bugs (the sum is greater than the parts).
13. There is value in using automated tests that are indeterminate (i.e. random), though we need methods to make a test case determinate. We are not advocating blind testing: you need to know what test you have run, and sometimes you need to be able to specify exact inputs or sequences of inputs. But if you can determine whether or not the program is passing the tests that you are running, there is a lot to be said for constantly giving it new test cases instead of re-running old tests that it has already passed.
14. We need to plan for the ability to log what testing was done. Some tools make it easier to log the progress of testing, some make it harder. For debugging purposes and for tracing the progress of testing, you want to know at a glance what test cases have been run and what the results were.

Data-driven approach

15. The subject matter (the data) of a data-driven automation strategy might include (for example):
   o parameters that you can input to the test cycle;
   o sequences of operations or commands that you make the test execute;
   o sequences of test cases that you drive the cycle through;
   o sequences of traffic states that you drive the test through;
   o networking standards documents that you have the program read and operate on.
16. Traffic-driven approaches can be highly maintainable and can be easier for non-programmers to work with.
17. There can be multiple interfaces that drive traffic-driven testing. You might pick one, or you might provide different interfaces for testers with different needs and skill sets for a particular networking feature.
Framework-driven approach

18. The degree to which you can develop a framework depends on the size and sophistication of your team.
19. When you are creating a framework, be conscious of the level at which you are creating functions. For example, you could think in terms of operating at one of three levels:
   o GUI/CLI command level, executing simple commands;
   o Domain level, performing actions on specific things;
   o Feature level, taking care of specific, commonly repeated tasks.
   You might find it productive to work primarily at one level, adding test cases for other levels only when you clearly need them. There are plenty of other ways to define and split levels; analyze the task in whatever way is appropriate. The issue here is that you want to avoid randomly shifting from creating very simple tests to remarkably long, complex ones.
20. Scripts loaded into the automation application's API library should generally contain error-trapping mechanisms. This is good practice for any type of programming, but it is particularly important for networking image code, because we expect the program that we are testing to be broken, and we want to see the first symptoms of a failure of any major/minor sub-condition in order to make reporting and troubleshooting easier.
21. When creating shared API library commands, there are risks in dealing with people's differing scripting and documentation styles.
22. Equip your automation development environment with plenty of reusable APIs.
23. It is desirable to keep test parameters in data files such as .ini files, settings files, and configuration files, rather than as constants embedded in the automation script or in the file that contains the script (a sketch illustrating this, together with the error trapping of point 20, follows this list).
24. Wrappers are a good thing. Use them as often as reasonably possible.

Localization

25. The goal of automated localization testing is to show that previously working baseline functionality and networking features still work.
26. The automation provides only a sanity-check level of testing. Beyond that, we rely on actual use and manual testing by QA teams.
27. If baseline and enabling testing is strong enough, the marginal return on making test scripts portable across platforms is rarely worthwhile, except for a small set of carefully selected scripts.
28. Translation/localization should be backward compatible.
29. It is important to regress all bugs in a localized version and, to the extent done in the baseline version, to extend automated tests to establish the same baseline for each feature.
30. The kinds of bugs likely to arise during a well-planned localization are unlikely to be detected by baseline regression tests.
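As a hedged illustration of points 20 and 23, the following Python sketch reads its test parameters from a configuration file and wraps a library command in error trapping, so that a failure is reported rather than aborting the run; the file name, the section/option names and the dut.send_cli() call are illustrative assumptions only.

   # Sketch only: parameters come from an .ini-style file, and the
   # library command traps errors instead of letting them abort the run.
   import configparser

   def load_params(path="testbed.ini"):
       cfg = configparser.ConfigParser()
       cfg.read(path)
       return {"vlan": cfg.getint("dut", "vlan"),
               "frame_size": cfg.getint("traffic", "frame_size")}

   def safe_command(dut, command):
       try:
           return ("PASS", dut.send_cli(command))  # hypothetical DUT API
       except Exception as err:                    # trap, report, continue
           return ("FAIL", str(err))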
6. Imparting Cost and Process Efficiency to Test Automation

The following points may need to be incorporated by an organization to reap the maximum benefits from the deployment of test automation for L2/L3 devices:

1) Distributed computing to provide 100% utilization of network and server resources. This can be achieved by using different group topologies, etc., while they are unused or underutilized. Remote servers' processing power can also be used to enhance the speed of the automation application.

2) Scheduling runs for particular test features, or a whole batch of suites, on a timely basis can be very useful for test automation productivity. The scheduler service enables tests to be run at appropriate times or critical moments when it is not possible to initiate them manually, or when their initiation depends on some higher-level development team synchronization. Normally, a queuing mechanism is also incorporated to re-schedule jobs which are not completed by a specified time and/or based on priorities, etc.

3) Utilizing the downtime of a particular network to run specific automation suites requiring only a subset of the entire configuration which is still active.

4) Providing an application framework to enable testers to automate new features at the flick of a few buttons.

5) Intuitive application working and configuration, ensured through the ease of the total test run flow and by enabling the user to enter and interact in all possible ways with just the application being run. It is really annoying to use complex controls or move to shell editors every now and then. Even console monitoring and process control should be provided in an easy-to-use and easy-to-implement fashion.

6) A global topology with provision to work on subsets and supersets. This widens the coverage provided to the various test features by enabling them to run on various topologies at one time, shuffling the constituent network devices each time they are run. It also becomes easy to simulate real-life customer test setup scenarios with the exact gateway, core and other network device configurations possible.

7) Third-party tool plug-ins for monitoring. A range of freeware available as third-party tools can enhance the automation application to a great degree. Since many capable people have already put in so much hard work developing these tools, it usually makes sense to use them as plug-ins instead of putting in the same effort to create them afresh for our own use.

8) Logs and interactive access for each of the runs and individual procedures should be faithfully portrayed on the GUI as well as saved to a file for future reference. Readability and debug levels can be used to hide unwanted information from users interested only in a brief test result. Color coding can improve readability too (e.g. failed tests can be denoted in red).

9) Provision to proceed on crash or error. This prevents the automation from getting stuck in case some critical bugs are found. This may be useful when automation is run unattended (e.g. holidays, lunch breaks, etc.).

10) Covering all possible available features ensures that the entire package can be run on any new build as an entry criterion, to verify that nothing basic or feature-based is broken. This also relieves the manual QA test engineers of the harsh testing work they have to perform as their System Verification Testing exercise.

11) Focus on regression testing, covering all conditions that are not addressed by, or are too time-consuming for, manual testing.

12) Capturing specialized testing sequences and flows on the console. All the possible traces and flow depictions on the console can be analyzed automatically by building expert decision-making functions into the test script itself.

13) Utilizing databases and the latest platform-independent languages such as Tcl/Tk, Java, etc.

14) Using multithreading and object-oriented concepts to pave the way for multitasking and plug-in scalability.

15) Integration with the Defect Management System, if any, for automatic logging of bugs or issues.

16) A mailing facility for sending the logs and reports of specific runs to managers and developers for prompt action.

17) Testing interoperability between different features, simulating the end-user setup.

18) A reusability factor in the coding, to enhance automation productivity.
19) Scalable and resilient code.

20) Platform independence of the application.

21) A web-enabled and multi-repository framework, giving open access to all, at any place and time.

22) The ability for the user to perform partial testing, by enabling breakpoints, checks, terminations and dormancies in the middle of a test case or a whole cycle.

23) An element management system allowing existing services, enhanced ones and constructive changes to be configured at the flick of a button.

24) The entire application should be very user-friendly and user-configurable for most of its features, so that whatever manipulation or idea the end user wants to try, he should be able to implement and plug it in at the flick of a button.

25) The automation should support multiple-vendor feature interoperability testing, just as in a real-life scenario.

26) Minimize manual topology changes, with total isolation and non-interference between the testing conducted and the remaining, unused topology.

27) Levels of testing should be provided, with specific features bundled to focus on specific areas at specific times, with due brevity of report and speed of operation.

28) Involve a variety of traffic generators to catch inherent bugs and to avoid seeing the same, predictable behavioral output every time.

29) The test pass criterion should be user-configurable, so as to enable quick runs with minimum/medium/maximum checks as required.

30) There should be a minimum of two repositories, implemented as databases: one containing all the static test cases, suites and static configurations, and the other containing the console logs, results and other debug outputs to be displayed.

31) Documentation and easy guidelines, pointers, help files, etc., so that things can be understood even in the absence of the automation engineers. The importance of proper documentation and help guides for a test automation application can never be overstated.

32) Interaction between the developers and the automation engineers right from the time feature development is in progress. All important tips should be incorporated while designing and implementing the test cases.

The immense growth and penetration of the Internet has forced network companies to rethink and fine-tune their traditional processes in line with the aggressive demands and quality-compliance expectations of their customers.

7. Security Considerations

This document raises no security issues.

8. Authors' Addresses

Vishal Rai
Infosys Technologies Ltd.
Hinjevadi, Pune 411 027
Maharashtra, India
Vishal_rai@infy.com

Vinay Vasudeva
Infosys Technologies Ltd.
Hinjevadi, Pune 411 027
Maharashtra, India
vasudeva@infy.com

9. Acknowledgement

We would like to acknowledge Mr. Gopikrishnan Selvarajan and Mr. David Park, both of Riverstone Networks, for their valuable inputs and support.

10. References

None.

Vinay & Vishal              Expires May 2002              Page 18 of 18