All Comments
TopTalkedBooks posted at August 19, 2017

There is a whole book on it: Working Effectively with Legacy Code by Michael Feathers.

The tl;dr is you write a bunch of unit tests. Treat the legacy system as a black box with inputs and outputs. Write tests that cover all of those inputs and outputs. Once you have good code coverage, you are free to refactor. You could replace your reimplementation of unordered_map with the real version. If the tests still pass, you are golden.
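As a sketch of that black-box approach (the `LegacyPricer` function and all names here are invented for illustration, not from the post), a characterization test simply pins down whatever the system currently does, right or wrong:

```java
// Hypothetical legacy function, treated as a black box; the tax rule is
// invented for illustration.
class LegacyPricer {
    static int priceWithTax(int cents) {
        return cents + (cents * 8) / 100; // whatever it currently does
    }
}

class CharacterizationTest {
    public static void main(String[] args) {
        // Pin down CURRENT behaviour before refactoring anything.
        check(LegacyPricer.priceWithTax(100) == 108);
        check(LegacyPricer.priceWithTax(0) == 0);
        check(LegacyPricer.priceWithTax(1) == 1); // integer-division quirk, pinned anyway
        System.out.println("all characterization tests passed");
    }

    static void check(boolean ok) {
        if (!ok) throw new AssertionError("behaviour changed!");
    }
}
```

If a refactoring breaks one of these checks, you changed observable behaviour, even if the "quirk" it pins looks like a bug.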

Of course, the real trick, and the really hard part, is to push back and never let those hacks get into the code base in the first place. Since no one person owns the code, and turnover also causes many hands to touch it, the only way for it to work is company policy. There need to be strict code reviews that reject hacks and messy code. It also needs to be OK to miss a sprint release due to rejected code. If it's not OK, well then have fun with the spaghetti code.

TopTalkedBooks posted at August 19, 2017

It's never easy working with legacy code. I would say start a new project if the code base is small, or update the existing one if it is large, and add in Spring Boot or an equivalent. Get Maven set up. Get Jenkins working on it. You are now one step closer.

The password issue needs to be addressed immediately.

Introduce some high level testing, it will be difficult to unit test everything. Start with end to end integration tests and slowly introduce more testing as you go. Any new features should test the class they are part of.

The next step is to use your tooling to help remove duplicate code; IntelliJ will highlight all duplicates. Start thinking about removing these. Then fix the non-critical issues: exceptions, JSPs/servlets.

It'll be slow and painful to get your test coverage up to a good level.


TopTalkedBooks posted at August 19, 2017

Working Effectively with Legacy code is great. It's not specifically a Java book, but I've found it very useful.

TopTalkedBooks posted at August 19, 2017
Essentially, my understanding of best practice is to write high-level functional tests for the features that appear to work, and then use them to ensure there are no regressions as a result of your changes. Some people even define legacy code as "code without tests".

TopTalkedBooks posted at August 19, 2017

Working with legacy systems is a black art that I didn't learn about until I took a job supporting and extending one such system. The book I link to above was critical in helping me to understand the approach taken by the team I was working with. It takes a keen, detail-focused mind to do this kind of work.

The approach we took was to create a legacy interface layer. We did this by first wrapping the legacy code within an FFI (foreign function interface). We built a test suite that exercised the legacy application through this interface. Then we built an API on top of the interface and built integration tests that checked all the code paths into the legacy system. Once we had that, we were able to build new features onto the system and replace each code path one by one.
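A minimal sketch of such a layer, with invented names throughout (the post doesn't show its actual interfaces): both the legacy adapter and each replacement code path implement the same interface, so one test suite can verify them interchangeably:

```java
// All names invented for illustration.
interface PaymentGateway {                       // the "legacy interface layer"
    int charge(int accountId, int cents);
}

class LegacyCore {                               // stand-in for the old system
    static int doCharge(int accountId, int cents) { return cents > 0 ? cents : 0; }
}

class LegacyGatewayAdapter implements PaymentGateway {
    // Wraps calls into the legacy code (imagine FFI calls here).
    public int charge(int accountId, int cents) { return LegacyCore.doCharge(accountId, cents); }
}

class NewGateway implements PaymentGateway {
    // A rewritten code path that must preserve the legacy behaviour.
    public int charge(int accountId, int cents) { return Math.max(cents, 0); }
}

class MigrationTest {
    public static void main(String[] args) {
        // The same integration test exercises both implementations, so each
        // code path can be replaced one by one with confidence.
        for (PaymentGateway gw : new PaymentGateway[] { new LegacyGatewayAdapter(), new NewGateway() }) {
            if (gw.charge(1, 500) != 500) throw new AssertionError();
            if (gw.charge(1, -10) != 0) throw new AssertionError();
        }
        System.out.println("old and new code paths agree");
    }
}
```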

Unsurprisingly we actually discovered bugs in the old system this way and were able to correct them. It didn't take long for the stakeholders to stop worrying and trust the team. However there was a lot of debate and argument along the way.

The problem isn't technical. You can simultaneously maintain and extend legacy applications and avoid all of the risks stakeholders are worried about; you can actually improve these systems in the process. The real problem is political: convincing those stakeholders that you can minimize the risk is a difficult task. It was the hardest part of working on that team -- even when we were demonstrating our results!

The hardest part about working with legacy systems are the huge bureaucracies that sit on top of them.

TopTalkedBooks posted at August 19, 2017
Working Effectively With Legacy Code by Michael Feathers

Debugging with GDB: The GNU Source-Level Debugger by Stallman, Pesch, and Shebs

The Art of Debugging with GDB, DDD, and Eclipse by Matloff & Salzman

TopTalkedBooks posted at August 19, 2017
There are even books about dealing with legacy code. I've found this one to be useful:

Working Effectively with Legacy Code, by Michael Feathers

TopTalkedBooks posted at August 19, 2017
> I've also seen people mangle well-factored but untestable code in the process of writing tests, which can be a tragedy when dealing with a legacy codebase that was written with insufficient testing but is otherwise well-designed.

Have you read Michael Feathers' Working Effectively with Legacy Code? [0]

In his definition, legacy code is any code that has no test coverage. It's a black box. There are errors in it somewhere. It works for some inputs. But you cannot quantify either of those things just by "inhabiting the mind of the original developers." The only way to work effectively with that code base in order to extend, maintain, or modify it is to bring it under test.

This is far more difficult with legacy code than with greenfield TDD, for the aforementioned reasons: there are unquantified errors and underspecified behaviours. You can't possibly do it in one sweeping effort, so the strategy is to accept that tests are useful and to add them with each change: write the test first, before making the change, and use it to prove the change is correct.

Slowly, over time, your legacy code base surfaces little islands of well tested code.

You have to be deliberate and careful. You have to think about what you're doing.

This is a much different experience from writing greenfield code, where TDD is effortless and drives you towards the answer.


TopTalkedBooks posted at August 19, 2017
So as to be constructive, I'm going to reference a classic: Working Effectively With Legacy code [0]. Here's a nice clip from an SO answer [1] paraphrasing it:

"To me, the most important concept brought in by Feathers is seams. A seam is a place in the code where you can change the behaviour of your program without modifying the code itself. Building seams into your code enables separating the piece of code under test, but it also enables you to sense the behaviour of the code under test even when it is difficult or impossible to do directly (e.g. because the call makes changes in another object or subsystem, whose state is not possible to query directly from within the test method).

This knowledge allows you to notice the seeds of testability in the nastiest heap of code, and find the minimal, least disruptive, safest changes to get there. In other words, to avoid making "obvious" refactorings which have a risk of breaking the code without you noticing, because you don't yet have the unit tests to detect that."
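A tiny example of an object seam in the spirit of the book's "subclass and override" technique (class names invented for the sketch): the test both changes and senses behaviour at the override point, without editing the method under test:

```java
import java.util.ArrayList;
import java.util.List;

// Class names invented for illustration.
class MailingList {
    final List<String> members = new ArrayList<>();

    void addMember(String email) {
        members.add(email);
        sendWelcome(email);                  // side effect a test can't allow
    }

    protected void sendWelcome(String email) {
        // imagine real SMTP traffic here
    }
}

// The overridable method is the seam: the test senses the side effect here
// without modifying addMember at all.
class SensingMailingList extends MailingList {
    String lastWelcomed;
    @Override protected void sendWelcome(String email) { lastWelcomed = email; }
}

class SeamDemo {
    public static void main(String[] args) {
        SensingMailingList list = new SensingMailingList();
        list.addMember("a@example.com");
        if (!"a@example.com".equals(list.lastWelcomed)) throw new AssertionError();
        System.out.println("sensed welcome for " + list.lastWelcomed);
    }
}
```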

As you get more experience under your belt, you'll begin to see these situations again and again of code becoming large, difficult to reason about or test, and similarly having low direct business benefit for refactoring. But crucially, learning how to refactor as you go is a huge part of working effectively with legacy code and by virtue of that, maturing into a senior engineer -- to strain a leaky analogy, you don't accrue tech debt all at once, so why would it make sense to pay it off all at once? The only reason that would occur is if you didn't have a strong culture of periodically paying off tech debt as you went along.

I'm not going to insinuate that it was necessarily wrong to solve the problem as you did, and the desire to be proactive about it is certainly not something to be criticized. But it wasn't necessarily right, either. Your leadership should have prevented something like this from occurring, because in all likelihood you wasted those extra hours and naively thought that extra hours equal extra productivity. They don't. You ought to aim for maximal results for minimal hours of work, so that you can spend as much time as you can delivering results.

And, unless you're getting paid by the hour instead of salaried, you're actually getting less pay. So to recap: you're getting less pay, you're giving the company subpar results (by definition, because you're using more hours to achieve what a competent engineer could do with only 40-hour workweeks, so you're 44% as efficient), and everyone's losing a little bit. Thankfully, you still managed to get the job done, and because you gained authorship and ownership over the new part of the codebase, you were able to argue for better compensation. Good for you; you should always bargain for what you deserve. But just because you got a positive outcome doesn't mean you went about it the most efficient way.

The best engineers (and I would argue workers in general) are efficient. They approach every engineering problem they can with solutions so simple and effective that they seem boring, only reaching for the impressive stuff when it's really needed, and with chagrin. If you can combine that with self-advocacy, you'll really be cooking with gas as far as your career is concerned. And it'll get you a lot further than the silly, childish delusion that more hours equal more results, or more pay. Solid work, solid negotiation skills, solid marketing skills and solid communication skills earn you better pay. The rest is fluff.

[0] [1]

TopTalkedBooks posted at August 20, 2017

As others already replied, it's late to write unit tests, but not too late. The question is whether your code is testable or not. Indeed, it's not easy to put existing code under test; there is even a book about this: Working Effectively with Legacy Code (see key points or precursor PDF).

Now, writing the unit tests or not is your call. You just need to be aware that it could be a tedious task. You might tackle it to learn unit testing, or consider writing acceptance (end-to-end) tests first, and start writing unit tests when you change the code or add a new feature to the project.

TopTalkedBooks posted at August 20, 2017

Working Effectively with Legacy Code by Michael Feathers (also available in Safari if you have a subscription) is an excellent resource for your task. The author defines legacy code as code without unit tests, and he gives practical walkthroughs of lots of conservative techniques—necessary because you're working without a safety net—for bringing code under test. Table of contents:

  • Part I: The Mechanics of Change
    • Chapter 1. Changing Software
      • Four Reasons to Change Software
      • Risky Change
    • Chapter 2. Working with Feedback
      • What Is Unit Testing?
      • Higher-Level Testing
      • Test Coverings
      • The Legacy Code Change Algorithm
    • Chapter 3. Sensing and Separation
      • Faking Collaborators
    • Chapter 4. The Seam Model
      • A Huge Sheet of Text
      • Seams
      • Seam Types
    • Chapter 5. Tools
      • Automated Refactoring Tools
      • Mock Objects
      • Unit-Testing Harnesses
      • General Test Harnesses
  • Part II: Changing Software
    • Chapter 6. I Don't Have Much Time and I Have to Change It
      • Sprout Method
      • Sprout Class
      • Wrap Method
      • Wrap Class
      • Summary
    • Chapter 7. It Takes Forever to Make a Change
      • Understanding
      • Lag Time
      • Breaking Dependencies
      • Summary
    • Chapter 8. How Do I Add a Feature?
      • Test-Driven Development (TDD)
      • Programming by Difference
      • Summary
    • Chapter 9. I Can't Get This Class into a Test Harness
      • The Case of the Irritating Parameter
      • The Case of the Hidden Dependency
      • The Case of the Construction Blob
      • The Case of the Irritating Global Dependency
      • The Case of the Horrible Include Dependencies
      • The Case of the Onion Parameter
      • The Case of the Aliased Parameter
    • Chapter 10. I Can't Run This Method in a Test Harness
      • The Case of the Hidden Method
      • The Case of the "Helpful" Language Feature
      • The Case of the Undetectable Side Effect
    • Chapter 11. I Need to Make a Change. What Methods Should I Test?
      • Reasoning About Effects
      • Reasoning Forward
      • Effect Propagation
      • Tools for Effect Reasoning
      • Learning from Effect Analysis
      • Simplifying Effect Sketches
    • Chapter 12. I Need to Make Many Changes in One Area. Do I Have to Break Dependencies for All the Classes Involved?
      • Interception Points
      • Judging Design with Pinch Points
      • Pinch Point Traps
    • Chapter 13. I Need to Make a Change, but I Don't Know What Tests to Write
      • Characterization Tests
      • Characterizing Classes
      • Targeted Testing
      • A Heuristic for Writing Characterization Tests
    • Chapter 14. Dependencies on Libraries Are Killing Me
    • Chapter 15. My Application Is All API Calls
    • Chapter 16. I Don't Understand the Code Well Enough to Change It
      • Notes/Sketching
      • Listing Markup
      • Scratch Refactoring
      • Delete Unused Code
    • Chapter 17. My Application Has No Structure
      • Telling the Story of the System
      • Naked CRC
      • Conversation Scrutiny
    • Chapter 18. My Test Code Is in the Way
      • Class Naming Conventions
      • Test Location
    • Chapter 19. My Project Is Not Object Oriented. How Do I Make Safe Changes?
      • An Easy Case
      • A Hard Case
      • Adding New Behavior
      • Taking Advantage of Object Orientation
      • It's All Object Oriented
    • Chapter 20. This Class Is Too Big and I Don't Want It to Get Any Bigger
      • Seeing Responsibilities
      • Other Techniques
      • Moving Forward
      • After Extract Class
    • Chapter 21. I'm Changing the Same Code All Over the Place
      • First Steps
    • Chapter 22. I Need to Change a Monster Method and I Can't Write Tests for It
      • Varieties of Monsters
      • Tackling Monsters with Automated Refactoring Support
      • The Manual Refactoring Challenge
      • Strategy
    • Chapter 23. How Do I Know That I'm Not Breaking Anything?
      • Hyperaware Editing
      • Single-Goal Editing
      • Preserve Signatures
      • Lean on the Compiler
    • Chapter 24. We Feel Overwhelmed. It Isn't Going to Get Any Better
  • Part III: Dependency-Breaking Techniques
    • Chapter 25. Dependency-Breaking Techniques
      • Adapt Parameter
      • Break Out Method Object
      • Definition Completion
      • Encapsulate Global References
      • Expose Static Method
      • Extract and Override Call
      • Extract and Override Factory Method
      • Extract and Override Getter
      • Extract Implementer
      • Extract Interface
      • Introduce Instance Delegator
      • Introduce Static Setter
      • Link Substitution
      • Parameterize Constructor
      • Parameterize Method
      • Primitivize Parameter
      • Pull Up Feature
      • Push Down Dependency
      • Replace Function with Function Pointer
      • Replace Global Reference with Getter
      • Subclass and Override Method
      • Supersede Instance Variable
      • Template Redefinition
      • Text Redefinition
  • Appendix: Refactoring
    • Extract Method
TopTalkedBooks posted at August 20, 2017

The first intermediate goal is to get a good picture of what exceptions are being ignored and where; for that purpose, you can simply add logging code to each of those horrid catch-everything blocks, showing exactly what block it is, and what is it catching and hiding. Run the test suite over the code thus instrumented, and you'll have a starting "blueprint" for the fixing job.
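A sketch of that instrumentation step (the method, logger name, and fallback are invented for illustration): the horrid catch-everything block stays for now, but a new log line reports exactly what it swallows and where:

```java
import java.util.logging.Logger;

// Names invented; the point is only the added log line in the catch block.
class Instrumented {
    static final Logger LOG = Logger.getLogger("legacy-triage");

    static int parseQuantity(String raw) {
        try {
            return Integer.parseInt(raw.trim());
        } catch (Exception e) {   // the horrid catch-everything block, kept for now
            // New: record exactly what is being swallowed, and where.
            LOG.warning("parseQuantity swallowed " + e.getClass().getName()
                        + ": " + e.getMessage());
            return 0;             // the existing fallback behaviour, unchanged
        }
    }

    public static void main(String[] args) {
        System.out.println(parseQuantity(" 42 "));  // 42
        System.out.println(parseQuantity("oops"));  // 0, plus a log line for the blueprint
    }
}
```

Running the test suite over code instrumented like this produces the "blueprint" of hidden exceptions described above.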

If you don't have a test suite, then, first, make one -- unit tests can wait (check out Feathers' great book on working with legacy code -- legacy code is most definitely your problem here;-), but you need a suite of integration tests that can be run automatically and tickle all the bugs you're supposed to fix.

As you go and fix bug after bug (many won't be caused by the excessively broad catch blocks, just hidden/"postponed" by them;-), be sure to work in a mostly "test-driven" manner: add a unit test that tickles the bug, confirm it breaks, fix the bug, rerun the unit test to confirm the bug's gone. Your growing unit test suite (with everything possible mocked out or faked) will run fast, and you can keep rerunning it cheaply as you work, to catch possible regressions ASAP, when they're still easy to fix.

The kind of task you've been assigned is actually harder (and often more important) than "high prestige" SW development tasks such as prototypes and new architectures, but often misunderstood and under-appreciated (and therefore under-rewarded!) by management/clients; make sure to keep a very clear and open communication channel with the stakeholders, pointing out the enormous amount of successful work you're doing, how challenging it is, and (for their sake more than your own) how much they would have saved by doing it right in the first place (maybe next time they will... I know, I'm a wild-eyed optimist by nature;-). Maybe they'll even assign you a partner on the task, and you can then do code reviews and pair programming, boosting productivity enormously.

And, last but not least, good luck!!! -- unfortunately, you'll need it... fortunately, as Jefferson said, "the harder I work, the more luck I seem to have";-)

TopTalkedBooks posted at August 20, 2017

Example from Book: Working Effectively with Legacy Code

Also given same answer here:

To run code containing singletons in a test harness, we have to relax the singleton property. Here’s how we do it. The first step is to add a new static method to the singleton class. The method allows us to replace the static instance in the singleton. We’ll call it setTestingInstance.

public class PermitRepository {
    private static PermitRepository instance = null;

    private PermitRepository() {}

    public static void setTestingInstance(PermitRepository newInstance) {
        instance = newInstance;
    }

    public static PermitRepository getInstance() {
        if (instance == null) {
            instance = new PermitRepository();
        }
        return instance;
    }

    public Permit findAssociatedPermit(PermitNotice notice) {
        ...
    }
}

Now that we have that setter, we can create a testing instance of a PermitRepository and set it. We’d like to write code like this in our test setup:

public void setUp() {
    PermitRepository repository = PermitRepository.getInstance();
    // add permits to the repository here
    PermitRepository.setTestingInstance(repository);
}
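To make the pattern concrete, here is a self-contained variant with an invented ConfigStore class (note the constructor is protected rather than private, a relaxation the book also suggests, so that a fake subclass is possible):

```java
// All names invented; a self-contained variant of setTestingInstance.
class ConfigStore {
    private static ConfigStore instance;

    protected ConfigStore() {}

    public static void setTestingInstance(ConfigStore testInstance) { instance = testInstance; }

    public static ConfigStore getInstance() {
        if (instance == null) instance = new ConfigStore();
        return instance;
    }

    public String lookup(String key) { return "production-" + key; }
}

class FakeConfigStore extends ConfigStore {
    @Override public String lookup(String key) { return "fake-" + key; }
}

class SingletonSeamDemo {
    public static void main(String[] args) {
        ConfigStore.setTestingInstance(new FakeConfigStore());
        // Code under test still calls getInstance() but now receives the fake.
        String value = ConfigStore.getInstance().lookup("db.url");
        if (!value.equals("fake-db.url")) throw new AssertionError();
        System.out.println(value);
    }
}
```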
TopTalkedBooks posted at August 20, 2017

Just using interfaces and a DI container does not mean that you are writing loosely coupled code. Interfaces should be used at application Seams, not for Entities:

A seam is a place where you can alter behaviour in your program without editing in that place

From Mark Needham:

... we want to alter the way that code works in a specific context but we don’t want to change it in that place since it needs to remain the way it is when used in other contexts.

Entities (domain objects) are the core of your app. When you change them, you change them in place. Building a Seam around your data access code, however, is a very good idea. It is implemented using the Repository pattern. LINQ, ICriteria, HQL are just implementation details that are hidden from consumers behind a domain-driven repository interface. Once you expose one of these data access technologies, your project will be coupled to them and will be harder to test. Please take a look at these two articles and this and this answers:
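A hedged sketch of such a repository seam, in Java rather than .NET and with all names invented: consumers depend only on the domain interface, so the data-access technology remains an implementation detail that can be swapped (here, for an in-memory fake in tests):

```java
import java.util.ArrayList;
import java.util.List;

// All names invented for the sketch.
class Order {
    final int id; final boolean paid;
    Order(int id, boolean paid) { this.id = id; this.paid = paid; }
}

interface OrderRepository {                 // the seam consumers depend on
    List<Order> findUnpaid();
}

// One implementation detail among many (JDBC, HQL, ... would be others).
class InMemoryOrderRepository implements OrderRepository {
    final List<Order> orders = new ArrayList<>();
    public List<Order> findUnpaid() {
        List<Order> unpaid = new ArrayList<>();
        for (Order o : orders) if (!o.paid) unpaid.add(o);
        return unpaid;
    }
}

class BillingService {                      // consumer: never sees the data-access tech
    private final OrderRepository repository;
    BillingService(OrderRepository repository) { this.repository = repository; }
    int unpaidCount() { return repository.findUnpaid().size(); }
}

class RepositoryDemo {
    public static void main(String[] args) {
        InMemoryOrderRepository repository = new InMemoryOrderRepository();
        repository.orders.add(new Order(1, true));
        repository.orders.add(new Order(2, false));
        if (new BillingService(repository).unpaidCount() != 1) throw new AssertionError();
        System.out.println("unpaid orders: 1");
    }
}
```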

TopTalkedBooks posted at August 20, 2017

Your question is very broad as there are many techniques (e.g. heavily unit testing your code) that are of help. It is probably best to read a book on that topic. I can highly recommend you Michael C. Feathers

Working Effectively with Legacy Code


Although this book is mostly Java-centric, the described techniques are generally applicable. It will definitely change the way you write and think about code and will be of help when working with existing applications.

Feathers' book is also one of the books that are most recommended in this SO post.

TopTalkedBooks posted at August 20, 2017

If this were me, I would not focus on unit tests for this. I would first try to get a suite of end-to-end tests which characterize the behaviour of the system as it stands. Then, as you refactor parts of the system, you have some confidence that things are no more broken than they were before.

As you point out, different LINQ providers have different behaviour, so having the end-to-end tests will ensure that you are actually testing that the system works.

I can recommend SpecFlow as a great tool for building your behaviour-based tests, and I can recommend watching this video on Pluralsight for a great overview of SpecFlow and a good explanation of why you might be better off with end-to-end tests than with unit tests.

You'll also get a lot out of reading 'Working effectively with legacy code' and reading some of the links and comments here might be useful as well.

You'll notice that some of the comments linked above point out that you need to write unit tests, but that often you need to refactor before you can write the tests (as the code isn't currently testable), and that this isn't safe without unit tests. Catch-22. Writing end-to-end tests can often get you out of this catch-22, at the expense of having a slow-running test suite.

TopTalkedBooks posted at August 20, 2017

For automated software unit tests I would recommend Google Test. There is a very good Q&A on this platform, which you can find here.

Additionally, there is CppUnitLite, which was developed by the author of "Working Effectively with Legacy Code", Michael Feathers.

I used AutoIt scripts for testing an MFC application just a little bit, but it was not that easy to maintain them properly and build an effective logging system for failed tests.

However, the unit tests depend heavily on the architecture of your program and the structure of your classes, especially the dependencies on other components/classes. So if you already have an existing MFC application which was not built with unit tests in mind, you will probably have to refactor a lot of things. Therefore, I would recommend the mentioned book. You can also use the classic "Refactoring" by Martin Fowler.

TopTalkedBooks posted at August 20, 2017

This is a repeat answer from a similar question.

Typically, it is very difficult to retrofit an untested codebase to have unit tests. There will be a high degree of coupling and getting unit tests to run will be a bigger time sink than the returns you'll get. I recommend the following:

  • Get at least one copy of Working Effectively With Legacy Code by Michael Feathers and go through it together with people on the team. It deals with this exact issue.
  • Enforce a rigorous unit testing (preferably TDD) policy on all new code that gets written. This will ensure new code doesn't become legacy code and getting new code to be tested will drive refactoring of the old code for testability.
  • If you have the time (which you probably won't), write a few key focused integration tests over critical paths of your system. This is a good sanity check that the refactoring you're doing in step #2 isn't breaking core functionality.
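One concrete way the rigorous-testing-on-new-code policy drives refactoring of the old code is the book's Sprout Method technique: put the new logic in a fresh, independently testable method and call it from the legacy one. A sketch with invented names:

```java
// Names invented for illustration of Sprout Method.
class InvoicePrinter {
    // Untested legacy method: we add exactly one call and change nothing else.
    String print(int cents) {
        // ...imagine pages of tangled legacy formatting here...
        return "TOTAL: " + formatCents(cents);   // the sprouted call
    }

    // The sprout: new, small, and unit-testable on its own.
    static String formatCents(int cents) {
        return (cents / 100) + "." + String.format("%02d", cents % 100);
    }
}

class SproutDemo {
    public static void main(String[] args) {
        if (!InvoicePrinter.formatCents(1234).equals("12.34")) throw new AssertionError();
        if (!InvoicePrinter.formatCents(5).equals("0.05")) throw new AssertionError();
        System.out.println(new InvoicePrinter().print(1234));
    }
}
```

The legacy method stays untested for now, but the new behaviour is covered from day one.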
TopTalkedBooks posted at August 20, 2017

Other hanselminutes episodes on testing:

Other podcasts:

Other questions like this:

Blog posts:

I know you didn't ask for books but... Can I also mention that Beck's TDD book is a must-read, even though it may seem like a dated beginner book on first flick through (and Working Effectively with Legacy Code by Michael C. Feathers of course is the bible). Also, I'd append Martin (& Martin)'s Agile Principles, Patterns & Techniques as really helping in this regard. Also in this space (concise/distilled info on testing) is the excellent Foundations of Programming ebook.

Good books on testing I've read are The Art of Unit Testing and xUnit Test Patterns. The latter is an important antidote to the first, as it is much more measured, whereas Roy's book is very opinionated and offers a lot of unqualified 'facts' without properly going through the various options. Definitely recommend reading both books though. AOUT is very readable and gets you thinking, though it chooses specific [debatable] technologies; xUTP is in-depth and neutral and really helps solidify your understanding. I read Pragmatic Unit Testing in C# with NUnit afterwards. It's good and balanced, though slightly dated (it mentions RhinoMocks as a sidebar and doesn't mention Moq), even if nothing is actually incorrect. An updated version of it would be a hands-down recommendation.

More recently I've re-read the Feathers book, which is timeless to a degree and covers important ground. However it's a more 'how, for 50 different wheres' in nature. It's definitely a must read though.

Most recently, I'm reading the excellent Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce. I can't recommend it highly enough - it really ties everything together from big to small in terms of where TDD fits, and various levels of testing within a software architecture. While I'm throwing the kitchen sink in, Evans's DDD book is important too in terms of seeing the value of building things incrementally with maniacal refactoring in order to end up in a better place.

TopTalkedBooks posted at August 20, 2017

About testability

Due to the use of singletons and static classes MyViewModel isn't testable. Unit testing is about isolation. If you want to unit test some class (for example, MyViewModel) you need to be able to substitute its dependencies by test double (usually stub or mock). This ability comes only when you provide seams in your code. One of the best techniques used to provide seams is Dependency Injection. The best resource for learning DI is this book from Mark Seemann (Dependency Injection in .NET).

You can't easily substitute calls to static members. So if you use many static members, your design isn't perfect.

Of course, you can use unconstrained isolation framework such as Typemock Isolator, JustMock or Microsoft Fakes to fake static method calls but it costs money and it doesn't push you to better design. These frameworks are great for creating test harness for legacy code.

About design

  1. Constructor of MyViewModel is doing too much. Constructors should be simple.
  2. If the dependency is null, the constructor must throw an ArgumentNullException, not silently log the error. Throwing an exception is a clear indication that your object isn't usable.

About testing framework

You can use any unit testing framework you like. Even MSTest, but personally I don't recommend it. NUnit and xUnit.net are MUCH better.

Further reading

  1. Mark Seeman - Dependency Injection in .NET
  2. Roy Osherove - The Art of Unit Testing (2nd Edition)
  3. Michael Feathers - Working Effectively with Legacy Code
  4. Gerard Meszaros - xUnit Test Patterns

Sample (using MvvmLight, NUnit and NSubstitute)

public class ViewModel : ViewModelBase
{
    public ViewModel(IMessenger messenger)
    {
        if (messenger == null)
            throw new ArgumentNullException("messenger");

        MessengerInstance = messenger;
    }

    public void SendMessage()
    {
        MessengerInstance.Send(Messages.SomeMessage);
    }
}

public static class Messages
{
    public static readonly string SomeMessage = "SomeMessage";
}

[TestFixture]
public class ViewModelTests
{
    private static ViewModel CreateViewModel(IMessenger messenger = null)
    {
        return new ViewModel(messenger ?? Substitute.For<IMessenger>());
    }

    [Test]
    public void Constructor_WithNullMessenger_ExpectedThrowsArgumentNullException()
    {
        var exception = Assert.Throws<ArgumentNullException>(() => new ViewModel(null));
        Assert.AreEqual("messenger", exception.ParamName);
    }

    [Test]
    public void SendMessage_ExpectedSendSomeMessageThroughMessenger()
    {
        // Arrange
        var messengerMock = Substitute.For<IMessenger>();
        var viewModel = CreateViewModel(messengerMock);

        // Act
        viewModel.SendMessage();

        // Assert
        messengerMock.Received().Send(Messages.SomeMessage);
    }
}
TopTalkedBooks posted at September 24, 2017

This may not be a nice experience. Consider reading "Working Effectively With Legacy Code".

TopTalkedBooks posted at October 06, 2017
If you have to write mocks in the native language, mocks will probably drive you insane.

Tools like mockito can make a big difference.

I worked on a project which was terribly conceived, specified, and implemented. My boss said that they shouldn't even have started it and shouldn't have hired the guy who wrote it! Because it had tests, however, it was salvageable, and I was able to get it into production.

This book makes the case that unit tests should always run quickly, not depend on external dependencies, etc.

I do think a fast test suite is important, but there are some kinds of slower tests that can have a transformative impact on development:

* I wrote a "super hammer" test that smokes out a concurrent system for race conditions. It took a minute to run, but after that, I always knew that a critical part of the system did not have races (or if they did, they were hard to find)

* I wrote a test suite for a lightweight ORM system in PHP that would do real database queries. When the app was broken by an upgrade to MySQL, I had it working again in 20 minutes. When I wanted to use the same framework with MS SQL Server, it took about as long to port it.

* For deployment it helps to have an automated "smoke test" that will make sure that the most common failure modes didn't happen.
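A miniature version of the "super hammer" idea from the first bullet, with invented thread and iteration counts (a real hammer would run far longer): many threads pound shared state, and the test fails if any update is lost.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Counts and names invented; the idea is to smoke out lost-update races.
class Hammer {
    static final AtomicInteger counter = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        final int tasks = 8, perTask = 10_000;
        ExecutorService pool = Executors.newFixedThreadPool(tasks);
        for (int t = 0; t < tasks; t++) {
            pool.submit(() -> {
                for (int i = 0; i < perTask; i++) counter.incrementAndGet();
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        // With a plain `counter++` on an int field instead of AtomicInteger,
        // this check fails regularly; that race is exactly what it smokes out.
        if (counter.get() != tasks * perTask) throw new AssertionError("lost updates!");
        System.out.println("no lost updates: " + counter.get());
    }
}
```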

That said, TDD is most successful when you are in control of the system. In writing GUI code often the main uncertainty I've seen is mistrust of the underlying platform (today that could be, "Does it work in Safari?")

When it comes to servers and stuff, there is the issue of "can you make a test reproducible". For instance you might be able to make a "database" or "schema" inside a database with a random name and do all your stuff there. Or maybe you can spin one up in the cloud, or use Docker or something like that. It doesn't matter exactly how you do it, but you don't want to be the guy who nukes the production database (or a another developer's or testers database) because the build process has integration tests that use the same connection info as them.

TopTalkedBooks posted at October 08, 2017

@gdbj good luck in any case, we've all been there! Strongly recommended reading: Working Effectively With Legacy code by Michael Feathers

TopTalkedBooks posted at November 25, 2017
You're definitely right that unit tests are a part of the solution.

The Feathers book can be read in a few different registers (making a case for what unit tests should be in a greenfield system; why and how to backfit unit tests into a legacy system), but it makes that case pretty strongly. It can seem overwhelming to get unit tests into a legacy system, but the reward is large.

I remember working on a system that was absolutely awful but was salvageable because it had unit tests!

Also, generally, getting control of the build procedure is key to the scheduling issue -- I have seen many new projects where a team of people work on something and think all of the parts are good to go, but you find there is another six months of integration work, installer engineering, and other things you need to do to ship a product. Automation, documentation, and simplification are all bits of the puzzle, but if you want agility, you need to know how to go from source code to a product, and not every team does.

TopTalkedBooks posted at January 28, 2018
I would try and clean up the bits I was working on.

This is a good book on the topic: refactoring a large code base with no tests.

TopTalkedBooks posted at March 11, 2018

I'm not sure what you mean by sending events. I would recommend reading the book Working Effectively with Legacy Code. That book teaches you how to find the seams in your code where you can isolate behavior and just test what's important. The fact that your code is in private methods is good, but what about extracting those private methods into their own class and passing in all needed info through a public method that can be tested?

TopTalkedBooks posted at March 18, 2018

Indeed, there's code duplicated here, so it's a good idea to try to eliminate it.

You could do:

function askQuestion($data, $question) {
    if ($data != '') {
        echo "<span class='question'>$question</span><span class='answer'>" . $data . '</span><br />';
    }
}

and use it like:

foreach ($answers as $a) {
    askQuestion($a->q2, "Question title?");
    askQuestion($a->q3, "Question label 2?");
    // and so on
}

And, as a general rule of thumb: when you're about to refactor code, don't forget to put non-regression tests in place first (it would be too bad to break the code while trying to improve it).

Last advice: if you have to work with legacy codebases on a regular basis, you might want to read Working Effectively with Legacy Code, which gives a lot of practical tips for turning such codebases into something easier to work with.
