Secure Software Design and Development

07/19/2021

Supply Chain Risk Management (SCRM) is the implementation of strategies to manage both everyday and exceptional risks along the supply chain, based on continuous risk assessment, with the objective of reducing vulnerabilities and ensuring continuity. A key concept in SCRM is Supply Chain Interdiction, which, according to Bell, Autry, and Griffis (2015), "refers to activities that constrain or otherwise negatively impact the resource base of a target entity such as a corporation, an entire industry, or a whole nation-state. Interdiction activities can cause degradation, disruption, destruction, or denial of access to supplies, and include counterfeit and intentionally tampered products being inserted into the supply chain." The following are the results of our seven weeks of work, which culminated in a Test Plan with six main areas: Functional Tests, Unit Tests, Regression Tests, Verification, Validation, and Mitigation. We first put together a document describing the functional requirements.

Test Plan Outcome

Unit Tests:
For each set of functions that implement our functional requirements, we wrote unit tests to ensure the function is properly implemented.
Functional Tests:
Describe the functional testing that will be performed on the application.
Regression Tests:
Explain how we implement regression testing to ensure that changes by one developer to one portion of the code do not introduce errors elsewhere.
Verification:
Describe our process to verify the software application.
Validation:
Describe our process to validate the software application.
Mitigation:
Explain mitigation strategies to address any vulnerabilities or software errors that were not caught by the testing mechanisms implemented as part of the test plan.

Unit Test

Unit testing, as applied to the reporting and alerting component of the Supply Chain Risk Management (SCRM) system, focuses on verifying that each action spelled out in the functional requirements artifact can be completed. Here we propose a list of test cases, including the steps to execute each one, its priority, whether it is manual or automated, and its expected outcome. The test cases cannot be fully detailed, because the functional requirements themselves were written at a high level.
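As a minimal sketch of what one such unit test could look like, the example below checks the 180-day retention requirement described later in this plan. The `Alert` class and `RETENTION_DAYS` constant are illustrative assumptions, not the project's real implementation:

```python
import unittest
from datetime import datetime, timedelta

# Assumed retention requirement from the functional requirements artifact.
RETENTION_DAYS = 180

class Alert:
    """Hypothetical stand-in for an alert record in the SCRM engine."""
    def __init__(self, created_at):
        self.created_at = created_at

    def is_retained(self, now):
        # An alert must remain available for at least 180 days.
        return (now - self.created_at) <= timedelta(days=RETENTION_DAYS)

class AlertRetentionTest(unittest.TestCase):
    def test_alert_within_retention_window(self):
        now = datetime(2021, 7, 19)
        alert = Alert(created_at=now - timedelta(days=90))
        self.assertTrue(alert.is_retained(now))

    def test_alert_past_retention_window(self):
        now = datetime(2021, 7, 19)
        alert = Alert(created_at=now - timedelta(days=200))
        self.assertFalse(alert.is_retained(now))
```

A test like this would run automatically (e.g. via `python -m unittest`) each time the relevant function changes.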



Unit Test Cases
Functional Tests
The purpose of the SCRM Functional Testing is to validate that the software meets the functional requirements specified within the design documents. The testing will be dynamic in nature, testing the application in a running state as opposed to reviewing the source code. Each requirement will be tested to verify that the expected output is returned when inputting the appropriate input. For requirements that are not testable in that manner, such as usability requirements, User Acceptance Testing (UAT) will be performed. Any error conditions will be documented and reviewed for suitable error messages.
Test Environment
The test environment will reflect the current standard end-user technology portfolio. Test users will represent each department.
Platform 1: Windows 7 desktop PC, Internet Explorer 11/Firefox Browsers, Wired network
Platform 2: Windows 10 Laptop, Microsoft Edge/Google Chrome Browsers, Wireless network

Test Plan
End users and designated testers will perform the test cases defined in the partial list of Test Cases section below. The results will be documented. Any deviations will be recorded and referred to developers for review. For those that involve other systems such as API integrations, coordination will be established so that testing can be completed.
Partial List of Test Cases
Requirement: Report and Alert Interaction
Users should be able to quickly get started with interacting with reports and alerts with minimal training required.
Expected Results:
Users can begin interacting with reports and alerts with minimal training.
Requirement: Help Capabilities
The system should contain "help" or documentation capabilities for end-users to supplement training and provide self-service usage assistance.
Expected Results:
End users can easily locate the help feature, navigate through the options, and search for common issues and usage instructions.
Requirement: 180-Day Retention
Alerts and reports must be available within the system for at least 180 days.
Expected Results:
While this is difficult to simulate directly, review the system configuration and retention parameters to establish, with a high degree of certainty, that data will be available for at least 180 days.
Regression Tests
The purpose of the SCRM system regression test is to validate that the functional and non-functional tests still complete successfully after any newly developed code is introduced into the Reporting and Alerting engine and its associated microservices.
System Overview
The SCRM Reporting and Alerting Engine provides a robust solution for supply chain management by coupling closely with the existing Taxonomy Builder, Fusion Engine, and Tracker Analytics systems. The engine's capabilities leverage RESTful APIs that can be extended by third parties and end users to provide reports in any format and be consumed by any existing framework used for reporting and generating alerts.



Golden File Concept

It is difficult to automate the comparison of test results with expected test results when the test data is complex, such as the format of a report or the contents of a region of memory. To simplify the comparison, we declare the expected test results as a "Golden" file. A "Golden" file is considered truth after the file has been analyzed for adherence to the system requirements and controlled documents, and executed multiple times with the same repeatable results. Formal test procedure results are then compared to the "Golden" file, and the two must match. Golden files used to check test output are included as test inputs.
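The golden-file comparison above can be sketched as a small helper. `compare_to_golden` is an illustrative function, not part of the real test harness; it normalizes line endings so platform differences do not cause spurious mismatches:

```python
from pathlib import Path

def compare_to_golden(actual_output: str, golden_path: Path) -> bool:
    """Return True when the generated output matches the approved golden file.

    Line endings are normalized before comparison; any other difference
    between the formal test result and the golden file is a failure.
    """
    golden = golden_path.read_text()
    normalize = lambda s: s.replace("\r\n", "\n")
    return normalize(actual_output) == normalize(golden)
```

A regression run would invoke this once per report format, failing the build on any mismatch so the golden file stays authoritative.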


  • Requirements traceability: If the documentation for the Supply Chain Risk Management (SCRM) system includes requirements that in some way pertain to a particular method, then that method's test series will be tailored, as much as possible, to test for the method's compatibility with those requirements.
  • Items or components tested: It is impossible to list every method that will be tested, because not every non-trivial method will be designed or brainstormed beforehand. Suffice it to say that a test series will be written for every non-trivial method that does not require user input, and that we will design our software so that non-trivial functions are separated from user input as much as possible.
  • Testing schedule and resources: Since this form of testing will be performed as we go, we will not set up a schedule for it. It should, however, take about as much time (and quite possibly more) to write these test series as writing the methods that they test. The idea is that even though most of these tests will probably succeed most of the time, by automating the testing process, we can easily avoid doing a great deal more work in the long run.
  • Hardware and software requirements: COTS/FOSS software may be required to run the test cases in series.
  • Constraints: The one important exception to this form of testing is the User Interface. It is nearly impossible to test basic user interface functionality without user interaction. Although the GUI methods are to be kept separate from as much of the non-trivial functionality as possible, the GUI developer will need to perform some manual checking to ensure that each button performs the desired function.
Database
• Database Software
• Features to be tested
Although this is an important component of our project, it is COTS/FOSS software, so our testing will only involve making sure that our software can use it to perform the appropriate database functions.
Testing approach:
Since this is third-party database software, we will test our own software's integration with it, rather than the database software itself.
Pass/Fail criteria: This component will automatically "pass" if the Server, Server Software Proxy, Reporting Application, and Class Library all pass their tests that relate to the database.
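Since we test our own software's integration rather than the database product itself, an integration test can exercise the insert/query path against an in-memory stand-in. The sketch below uses SQLite purely for illustration; the `alerts` table and column names are assumptions, not the project's real schema:

```python
import sqlite3

def open_store() -> sqlite3.Connection:
    """Create the in-memory store our application code talks to."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE alerts (id TEXT PRIMARY KEY, message TEXT)")
    return conn

def store_alert(conn: sqlite3.Connection, alert_id: str, message: str) -> None:
    # Parameterized query: the application, not the database, is under test here.
    conn.execute("INSERT INTO alerts (id, message) VALUES (?, ?)",
                 (alert_id, message))

def fetch_alert(conn: sqlite3.Connection, alert_id: str):
    row = conn.execute("SELECT message FROM alerts WHERE id = ?",
                       (alert_id,)).fetchone()
    return row[0] if row else None
```

A round-trip check (store an alert, fetch it back, confirm the contents match) is the kind of "our software can use it" test this section describes.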
Mobile Client
• Features to be tested
At the component testing stage, the User Interface will only receive basic testing (make sure buttons work and bring up correct windows, etc.) and most of the testing will be based on the back-end methods that call the Server Procedure Proxy's methods to populate the GUI with the intended information.
Testing approach:
Test series will be used to test the functionality of each non-trivial back-end function.
Human testing will be used to test the GUI functions.
Because the Client's actual functionality is so dependent on all of the other components working correctly, the Client application cannot be tested as a whole until the entire system can be tested.
Pass/Fail criteria: If the test series all pass, and if the basic windows appear as they should when the user taps the appropriate buttons, this test is considered to pass. If not, it fails.
References:
The SDS provides more information on the Mobile Client's intended functionality.
Desktop Client
• Features to be tested
At the component testing stage, the User Interface will only receive basic testing (make sure buttons work and bring up correct windows, etc.) and most of the testing will be based on the back-end methods that call the Server Procedure Proxy's methods to populate the GUI with the intended information.
Testing approach:
Test series will be used to test the functionality of each non-trivial back-end function.
Human testing will be used to test the GUI functions.
Because the Client's actual functionality is so dependent on all of the other components working correctly, the Client application cannot be tested as a whole until the entire system can be tested.
Pass/Fail criteria: If the test series all pass, and if the basic windows appear as they should when the user clicks the appropriate buttons, this test is considered to pass. If not, it fails.
Validation
In the software validation process, we will evaluate the final product (the Reporting & Alerting software) to check whether it meets customer expectations and requirements.
Software Design Validation
  • Validate the number of servers required to support the volume of logs
  • Validate the software architecture design with the government agencies to avoid later problems relating to scalability or performance. Ask the agencies to provide a capacity plan that you can use as a scalability roadmap.
  • Validate the design log aggregation points into the architecture.
  • Allow for a Development Manager/DB in your architecture. It is possible to crash/lag a system in the process of creating reporting & alerting content (rules, reports, etc.). Having a non-production system to build and test content on will pay big dividends the first time something being written fails and forces a manager to restart.
  • Validate reporting & alerting engine network connectivity
  • Validate the reporting & alerting engine database
  • Determine the disk space requirements for your reporting & alerting engine database(s)
  • Train the implementation team to test the reporting and alerting engine's dashboard

Functionality Validation
  • Configure the Reporting and Alerting software engine so it can start accepting logs and alerts from third-party sources
  • Configure it to transmit events
  • Validate events are being received at the manager from the agents
  • Check to see all expected events are being received
  • Validate that events are being parsed and classified properly
  • Validate the Reporting and Alerting software is processing events properly
  • Validate data normalization
  • Validate correlation function
  • Validate database archiving capability
  • Validate database restore functionality
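The "events are parsed and classified properly" check above can be sketched as a small validation pass. The required field names here are illustrative assumptions; the real SCRM event schema would come from the design documents:

```python
import json

# Assumed minimum fields for a normalized event (illustrative only).
REQUIRED_FIELDS = {"timestamp", "source", "severity", "message"}

def is_valid_event(raw: str) -> bool:
    """An event passes when it is well-formed JSON with every required field."""
    try:
        event = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return isinstance(event, dict) and REQUIRED_FIELDS.issubset(event)

def validation_summary(raw_events):
    """Count how many received events parsed and normalized correctly."""
    valid = sum(1 for e in raw_events if is_valid_event(e))
    return {"received": len(raw_events), "valid": valid,
            "invalid": len(raw_events) - valid}
```

Running this over a sample of events received from the agents gives a quick pass/fail picture of parsing and normalization before deeper correlation checks.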

Deployment Validation
  • Provide access to test user community
  • Have these users validate content suitability for their assigned roles
  • Build production accounts for user community
  • Build accounts with appropriate rights
  • Disseminate user accounts and software
  • Train end users and SOC personnel in Security Operations Centers on the various functions.
  • Migrate business processes to this new Reporting & Alert environment
  • Integrate into the CSIRT/Incident Handling process
  • Educate internal groups on capabilities and limitations of the Reporting & Alerting software product; this can include Audit, Management, and especially Legal.
  • If possible (based on regulatory and legal requirements), develop whitelist-based filtering to prevent your database from filling with useless events. While you would like to have every log at your fingertips, the cost of storage and bandwidth can be exorbitant. Determine some local tradeoffs and filter at the log collection point to reduce overhead.
User Experience Validation
  • Provide test accounts to partners who will be using the system, then give them the playbook on how to use the dashboard for viewing threat information
  • Let them write rules and observe logs and alerts
  • Collect feedback from partners and government-agency end users to see if there is room for improvement in the user experience
Mitigation
Though not every vulnerability or software error will be identified through our testing processes, we aim to mitigate the risk of these issues through proper planning and execution. By utilizing a risk management process and lifecycle testing of our software, the overall risk of major vulnerabilities or code errors will be significantly lowered.




Risk Management Process

Compared to a traditional waterfall software project, scope on our agile project is not rigidly managed but is left flexible. The key determinant of success on our project is customer satisfaction. Our software development teams embrace change and understand that requirements will evolve throughout a project, which is why agile methodologies allow requirements to be defined iteratively in the product backlog. We utilize the Scrum methodology, where the product backlog is an ordered list of requirements that the Scrum team maintains for each product. The backlog changes as business conditions change, technology evolves, or new requirements are defined. Continuous customer involvement is necessary on agile projects since the customer must prioritize the requirements and make the final decision about which ones will be addressed in each new iteration.
Agile Strategy for Managing Bugs
There are two general strategies for managing software bugs on an agile project. When a bug is detected, the first order of business is to determine how critical it is and what impact it will have on the functionality of the application or the entire system. Our generally accepted taxonomy has the following severity levels:
  • Severity 1: An error that prevents the accomplishment of an operational or mission-essential function, prevents the operator/user from performing a mission-essential function, or jeopardizes personnel safety.
  • Severity 2: An error that adversely affects the accomplishment of an operational or mission-essential function and for which no acceptable alternative workarounds are available.
  • Severity 3: An error that adversely affects the accomplishment of an operational or mission-essential function for which acceptable alternative workarounds are available.
  • Severity 4: An error that is an operator/user inconvenience and affects operational or mission-essential functions.
  • Severity 5: All other errors.
Critical bugs, or showstoppers as they are often called, are so severe that they prevent us from further testing. But a critical bug that, for example, causes an application to crash may be a low priority if it happens very rarely. Priority for fixing bugs should be based on the risk potential of the bug. On fast-paced agile projects, bug fixes for low-severity bugs often get low priority and are usually only scheduled when time is available. Risk-based software testing looks at two factors: the probability of the bug occurring and the impact of the bug when it occurs. High-impact/high-probability bug fixes should be scheduled first: for example, a bug in a particular module of code for an online shopping cart algorithm that keeps your business from processing transactions. On the other hand, a bug that introduces a very slight rounding error in that same transaction should have a lower priority. As shown in the figure, we utilize a risk matrix for measuring the severity of the bug or vulnerability and address each one as applicable.

Risk Matrix
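The risk-based prioritization described above can be sketched as a simple scoring function, where risk is the probability of the bug occurring multiplied by its impact. The 1-5 scales and priority thresholds here are illustrative assumptions, not the team's actual matrix values:

```python
def risk_score(probability: int, impact: int) -> int:
    """Both inputs are rated 1 (lowest) to 5 (highest)."""
    return probability * impact

def fix_priority(probability: int, impact: int) -> str:
    score = risk_score(probability, impact)
    if score >= 15:
        return "fix first"       # high impact / high probability
    if score >= 8:
        return "schedule soon"
    return "backlog"             # e.g. a rare, slight rounding error
```

For instance, a transaction-blocking shopping cart bug (probability 5, impact 5) lands in "fix first", while a rare rounding error (2, 2) stays in the backlog until time is available.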

A Second Agile Strategy for Managing Bugs

The second general strategy for managing software bugs on agile projects is to avoid them in the first place. There is a school of thought that says that a problem caught in development is not a bug since the software is still being worked on.
Agile is all about short, flexible development cycles that respond quickly to customer demand. Continuous Delivery (CD) is a software development strategy that optimizes our delivery process to get high-quality software into the hands of our customers as quickly as possible. The notion of releasing a prototype, or minimum viable product (MVP), is crucial for getting early feedback. Once the MVP is released, we're then able to get feedback by tracking usage patterns, which is a way to test a product hypothesis with minimal resources right away. Every release going forward can then be measured for how well it converts into the user behaviors we want the release to achieve. The concept of a baseline MVP product that contains just enough features to solve a specific business problem will also reduce wasted engineering hours and a tendency for feature creep that often leads to buggy software.
Agile Projects

Agile project management accommodates change as the need arises, with scope, schedule, and/or cost varying as required. This flexibility comes because agile teams engage with customers and other stakeholders throughout the project, to do things like prioritizing bug fixes and enhancements in the team's backlog, and not just at the end of the project.
On agile projects, quality comes through collaboration. Effective collaboration is vital for prioritizing bugs in the risk-based software testing approach described above, as well as throughout the entire Continuous Delivery process. An agile team that uses a test management tool that allows them to work collaboratively in real-time will also be able to recognize defects in their products and mitigate them more quickly. This shortens development and testing cycles, improves team efficiency, and reduces the time it takes the team to bring high-quality software to market.

Reference

  • Wieland, A., Wallenburg, C.M., 2012. "Dealing with supply-chain risks: Linking risk management practices and strategies to performance." International Journal of Physical Distribution & Logistics Management, 42(10).
  • Bell, E. J., Autry, C.W., Griffis, S.E., 2015. "Supply Chain Interdiction as a Competitive Weapon." Transportation Journal, Vol. 54, No.1, Winter 2015

Reflection

The Secure Software Design and Development course was one of the best courses in the MS in Cyber Security Operations and Leadership program at the University of San Diego (USD). It brought together peers with different backgrounds, and the way we worked together was simply an amazing experience. Initially I felt I wouldn't be able to contribute much since I don't have a development background, but later I was able to propose the design of the program, i.e., leveraging the Splunk engine.
As we know, in the past it was common practice to perform security-related activities only as part of testing, and security was treated as a separate island. This resulted in a high number of issues discovered too late (or not discovered at all). It is a far better practice to integrate activities across the SDLC to help discover and reduce vulnerabilities early, effectively building security in. It was great to understand how a secure SDLC process ensures that security assurance activities such as penetration testing, code review, and architecture analysis are an integral part of the development effort. The primary advantages of pursuing a secure SDLC approach are:
  • More secure software as security is a continuous concern
  • Awareness of security considerations by stakeholders
  • Early detection of flaws in the system
  • Cost reduction as a result of early detection and resolution of issues
  • Overall reduction of intrinsic business risks for the organization
Diving deeper into the SDLC, in my opinion security principles apply to all phases of the typical SDLC:
In the Requirements phase, security teams get involved in early product conversations to build an initial risk assessment.
Design involves threat modeling, identifying risks in the architecture, and determining what resources could be compromised.
In the Coding phase, developers use secure coding libraries and follow best practices to ensure that they are doing things like input, query, and token validation, and may use SAST tools to identify possible vulnerabilities.
In the Testing phase, developers, QA, and security teams may use SAST, DAST, and IAST tools and fuzzers, and also penetration-test the application. When potential vulnerabilities and/or bugs are found, the results are delivered back to the development team. Finally, the product is released to pre-production, where it undergoes configuration and networking review prior to the move to production.
Developing or designing software is not just a technical endeavor but an ethical one, because decisions made during development have the potential to affect lives, e.g., healthcare software and mission-critical apps. I agree with Olga V. Mack that the field of software development needs a more intentional, mature, and consistent ethical framework. There are many ways to design such a framework, but a formal and mandatory self-regulated model may be a good place to start, because there are mature examples to follow from other industries. Professions that formally and mandatorily self-regulate, such as lawyers and doctors, provide models for how software development can be professionalized, with a mature credentialing process, ethics standards, continuing education requirements, a violation adjudication process and body, and periodic guidance.
Independent of framework specifics, it is time for the software industry to stop hoping, start guiding software developers about ethics, and ultimately hold software professionals accountable. After all, we all depend on software developers to build an inhabitable world without unpleasant surprises.
Ranjan Kunwar - Capstone
All rights reserved 2021