
Summit speakers break down new era of developing software

Top thought leaders discuss how to choose the right software development process and how best to use AI and ML in this comprehensive BrightTALK summit.

Software development is a critical process to get right. Businesses are more dependent on their software applications than ever before, deriving significant revenue and exposure from software, which makes choosing the right methodology essential to their success. At February's BrightTALK summit Software Development Methodologies, six diverse thought leaders provided viewers with tips on selecting the right approach, as well as on integrating new technology such as artificial intelligence (AI) and machine learning (ML) into their processes to produce the highest quality software.

Determine the right software development model

The summit commenced with Yeghisabet Alaverdyan, head of systems integration at Armenia-based consultancy EKENG CJSC, discussing the seven stages of the software development lifecycle:
1. Planning and feasibility.
2. Requirement analysis.
3. Software design.
4. Code development.
5. Software validation and testing.
6. Software deployment.
7. Software maintenance and legacy code support.

Alaverdyan also provided overviews of commonly used software development process models, such as the Waterfall model and the Agile process, and encouraged attendees to explore software reuse as a valid approach. She wrapped up her talk by reminding viewers that the right approach can vary depending on project management status, goals, budget, team structure and experience.

With testing an essential component of the software development process, author and professor Mauricio Aniche offered viewers practical tips for designing a scalable test strategy. His first piece of advice on scaling was to focus on fast tests. "Most of us don't work for Google and don't have Google-scale infrastructure." He advised companies to be careful with shared resources and to reduce the scope of tests, with the overall aim of making large-scale testing as seamless as possible. "Divide to conquer. Don't try to catch complicated bugs with integration tests. You're not that good or creative!"

Aniche encouraged viewers to focus on pre-merge tests and to define their own test pyramid: in other words, to decide what should be tested and at what level. These tests improve the developer experience and prevent broken code from being pushed out. Another suggestion: Make it easy to write tests. "Nobody spends time on testing that is difficult." Finally, he reminded viewers that completing the testing process is just one step; companies must continue their coordination efforts beyond testing.
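
Aniche's emphasis on fast, narrowly scoped pre-merge tests can be made concrete with a small sketch. The Python example below is our own illustration, not code from the talk, and the parse_price function is hypothetical.

```python
# A minimal sketch of a fast, pre-merge-friendly unit test (pytest).
# parse_price is a hypothetical function used only for illustration.

def parse_price(raw: str) -> float:
    """Convert a price string like '$1,299.99' to a float."""
    return float(raw.replace("$", "").replace(",", ""))

def test_parse_price_strips_symbols():
    # No database, network or shared state: the test runs in
    # microseconds, so it can gate every merge without slowing CI.
    assert parse_price("$1,299.99") == 1299.99

def test_parse_price_plain_number():
    assert parse_price("42") == 42.0
```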

Anastasiia Syrota, project manager at Intellectsoft, went over the top 12 software development approaches her company has used. Her comprehensive talk broke down each model, its benefits and drawbacks, and how viewers can choose the best one for their budget, team size, project size and customer expectations. She covered the following approaches:

  1. Waterfall model.
  2. Agile model.
  3. Spiral model.
  4. Rational unified process (RUP).
  5. Prototyping.
  6. Joint application development (JAD).
  7. Rapid application development (RAD).
  8. Dynamic systems development method (DSDM).
  9. Feature-driven development (FDD).
  10. Lean.
  11. Extreme programming (XP).
  12. Scrum.

Agile and Scrum are the most popular of these methods as they are both simple, flexible and well-suited for fast-moving environments.

Challenges in putting GenAI into use

Users have noticed that generative AI (GenAI) shares a problem with the humans who produced it: cognitive bias. Software quality engineer Gerie Owen explored how to test for and recognize cognitive bias in AI, as well as how to manage the problematic outcomes that biased data produces. Bias, she explained, originates in data, and data originates with humans, for whom bias can be inescapable. One example of biased data creating real-life problems and perpetuating unfairness came when Amazon used AI tools to scan the web for potentially strong job candidates. The training data consisted of resumes submitted for past roles. Since those candidates were mostly male, the AI "learned" that male candidates were superior for the roles. This is an example of latent bias, in which two concepts become incorrectly correlated and promote stereotyping.

Another is interaction bias, in which the AI fixates on words it applies incorrectly or is never "trained" to recognize offensive behavior, as happened with Microsoft's Tay chatbot. Biases matter because they produce wrong answers, subtle discrimination and suboptimal results. Another troubling example is a London-based doctor whose key card would not let her enter the fitness center locker room because "doctor" had been coded as synonymous with "male."
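
To see how latent bias arises, consider a minimal sketch of our own, not material from Owen's talk: a naive screener "trained" on skewed historical hiring data simply replays the skew.

```python
# A minimal sketch (our own illustration) of latent bias: historical
# hiring data that skews male teaches a naive frequency-based screener
# to prefer male candidates. The data is fabricated for illustration.

# Hypothetical training data: (gender, was_hired) pairs from past roles.
history = ([("male", True)] * 80 + [("female", True)] * 5 +
           [("male", False)] * 10 + [("female", False)] * 5)

hire_rate = {}
for gender in ("male", "female"):
    hired = sum(1 for g, h in history if g == gender and h)
    total = sum(1 for g, h in history if g == gender)
    hire_rate[gender] = hired / total

# The "model" replays the historical skew: gender becomes incorrectly
# correlated with suitability, which is exactly latent bias.
print(hire_rate)  # {'male': 0.888..., 'female': 0.5}
```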

Owen encouraged viewers not to shy away from the bigger questions AI use presents to companies. As society grows more interlinked with AI, data testing is critical to weed out biases that can dramatically skew results. She recommended testing with real-life data and personas and keeping human decision-makers in the loop. "Do we want AI to replace human intelligence and decisions? Do we accept bias if it's consistent with human decision-making? Software behaves in unexpected ways. Machines can respond too fast before humans have time to intervene. You can't make intelligence from code. Algorithms can't actually think like humans. If we choose the data, it will likely be biased."
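
One way to act on that advice, sketched below as a hypothetical example of our own rather than a harness from the talk: run paired personas that differ only in a protected attribute through the system and flag any divergence for human review.

```python
# A minimal sketch (our own example, not Owen's) of a persona-based
# bias check: identical candidates who differ only in gender should
# receive the same screening decision.

def screen(candidate: dict) -> bool:
    """Hypothetical stand-in for the AI system under test."""
    return candidate["years_experience"] >= 5

def test_gender_does_not_change_outcome():
    base = {"years_experience": 7, "gender": "female"}
    variant = dict(base, gender="male")
    # A real harness would iterate over many real-life personas and
    # route any divergence to a human decision-maker for review.
    assert screen(base) == screen(variant)

test_gender_does_not_change_outcome()
```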

Christopher Tozzi, lecturer at Rensselaer Polytechnic Institute, discussed the open source licensing concerns that AI users grapple with when using the technology to aid in software coding. Tozzi began by explaining the open source community's relatively long history of becoming embroiled in lawsuits involving intellectual property. With the exploding use of GenAI coding assistants in software development, companies must contend with whether they are violating open source licenses.

According to Tozzi, "Maybe. It depends." It depends on whether the use of the code qualifies as copying, how much code a user must copy to violate the license, and whether GenAI tool vendors are even aware of how their products use open source. Making matters trickier, most open source licenses don't define how much code developers can copy. That creates problems for open source developers, who don't want their code misused, and for developers using GenAI tools, who may be using other people's code without realizing it, placing their employers in a legal conundrum. Tozzi offered guidance for developers and businesses alike: minimize copying AI-generated code; track how and where you use it; scan codebases for open source code; and, lastly, follow the licensing lawsuits involving major players such as The New York Times, OpenAI and Microsoft to get a better sense of where open source AI use is headed. "As of now, nobody is right because this is such a fluid and dynamic issue."
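
As a rough illustration of the codebase-scanning advice, here is a minimal sketch of our own that greps source files for common license markers. It is not a tool Tozzi mentioned, real audits should use dedicated license scanners, and the patterns and directory are assumptions.

```python
# A minimal sketch (our own illustration) of scanning a codebase for
# common open source license markers. The patterns below are a small,
# non-exhaustive sample.
import re
from pathlib import Path

LICENSE_PATTERNS = {
    "GPL": re.compile(r"GNU General Public License", re.I),
    "MIT": re.compile(r"MIT License", re.I),
    "Apache-2.0": re.compile(r"Apache License,? Version 2\.0", re.I),
    "SPDX tag": re.compile(r"SPDX-License-Identifier:\s*\S+"),
}

def scan(root: str) -> None:
    # Extend the glob to other file extensions as needed.
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for name, pattern in LICENSE_PATTERNS.items():
            if pattern.search(text):
                print(f"{path}: possible {name} notice")

scan(".")  # point this at your repository root
```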

With all the hype surrounding AI, will the technology live up to its lofty expectations? Author and managing director Matt Heusser took on this big question in his talk. Heusser mapped AI onto the Gartner Hype Cycle, in which a new technology starts with high anticipation, peaks with inflated expectations, falls into a trough of disillusionment, climbs a slope of enlightenment and finally reaches a plateau of productivity. "We are toward the end of the peak of inflated expectations, where negative press has begun." He warned viewers to be leery of anything touted as amazing and transformational, as ChatGPT is known to generate cliches and repeated words and is only as intelligent as the data powering it. Still, these products hold huge potential for testing in the form of test data generation, code generation, requirements analysis and unit test generation. It is imperative to get the test data right before you implement AI. Heusser concluded his talk by advising viewers to be skeptics. "Ask for examples. Talk to the person who did the work. Ask for samples and ask if they'll work for you. In other words, be a tester."
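
To make the test data generation use case concrete, here is a minimal, non-AI sketch of our own: a seeded generator that produces synthetic records and sanity-checks them before use, in the spirit of Heusser's advice to get the test data right first. The field names are hypothetical.

```python
# A minimal sketch of test data generation (our own illustration; an AI
# assistant would produce richer, schema-aware records).
import random
import string

def random_customer(seed: int) -> dict:
    rng = random.Random(seed)  # seeded, so failures are reproducible
    name = "".join(rng.choices(string.ascii_lowercase, k=8))
    return {
        "name": name.title(),
        "age": rng.randint(18, 90),
        "balance": round(rng.uniform(0.0, 10_000.0), 2),
    }

# Generate a batch and verify it before relying on it in tests.
batch = [random_customer(i) for i in range(100)]
assert all(18 <= c["age"] <= 90 for c in batch)
print(batch[0])
```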

Follow BrightTALK to discover more valuable tips to enhance your IT environment from all angles.

Alicia Landsberg is a senior managing editor on the BrightTALK summits team. She previously worked in TechTarget's networking and security group and served as senior editor for product buyer's guides.
