It is interesting how differently businesses handle updated payment information. Amazon takes the new card and is ready to go; Comcast takes over a month before the new card is fully processed.
The difference is driven by business requirements.
The core of the Comcast billing system is likely a mainframe-based batch system, with elements that might have been developed as early as the 1960s, but more likely at least ten or so years later. These types of systems still live on in utilities (phone, gas, electric), insurance applications, and elsewhere. They have usually been updated around the edges: for example, you now enter your credit card or checking account information into a web application rather than filling out a form or talking to a customer service agent, but the information just gets forwarded into the old batch systems; and you get nice statements printed on crisp laser printers rather than on impact printers with worn-out ribbons. These updates are ongoing additions rather than complete replacements. As a result, these application systems are enormously complicated, and rarely does one person or department completely understand how the whole thing works start to finish. They can be difficult to work with.
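The pattern described above can be sketched in a few lines. This is purely illustrative (all names and data structures are my own invention, not anything from a real billing system): the web front end only records the card update in a pending queue, and nothing touches the master account records until the batch run at the end of the cycle.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative sketch of "new front end, old batch back end":
# the web handler enqueues the update; only the batch job applies it.

@dataclass
class CardUpdate:
    account_id: str
    new_card_number: str
    received: date

pending_updates: list[CardUpdate] = []          # stand-in for a queue or flat file
accounts: dict[str, str] = {"A100": "4111-old"}  # stand-in for the master file

def web_submit_card(account_id: str, card_number: str) -> None:
    """Front-end path: just record the request; the account is untouched."""
    pending_updates.append(CardUpdate(account_id, card_number, date.today()))

def nightly_batch_run() -> int:
    """Batch path: drain the queue and apply updates to the master file."""
    applied = 0
    while pending_updates:
        upd = pending_updates.pop(0)
        if upd.account_id in accounts:
            accounts[upd.account_id] = upd.new_card_number
            applied += 1
    return applied

web_submit_card("A100", "4111-new")
assert accounts["A100"] == "4111-old"   # submitted, but not yet effective
nightly_batch_run()
assert accounts["A100"] == "4111-new"   # effective only after the batch run
```

The gap the customer experiences is exactly the gap between `web_submit_card` and the next `nightly_batch_run`; in a real billing system that run may happen once per billing cycle rather than nightly.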
A lot of people ask why these companies do not just replace these antiquated systems with some newer technology. The answer is that it is usually not worth the cost or the risk. How much more cable, phone, and internet could Comcast sell if the CC updates were immediately effective, rather than taking a whole billing cycle? Probably not a whole lot.
Amazon, on the other hand, came along in a time when newer technologies were already available. Imagine Amazon on an older batch-based system. Nobody would buy anything if a transaction took a month to complete.
I'll diverge a little here, so quit reading if you have no interest.
IBM's first and second generations of mainframes (which were considered the first feasible general-purpose computers for widespread use) were designed around whatever hardware the engineers could produce reliably and at an acceptable cost to the customer. The problem was that all software had to be rewritten when moving to a new generation, and this impeded sales.

The third generation (the System/360, announced in 1964) took a different approach: IBM first defined a computer architecture, and the hardware platforms had to conform to it. This let IBM make a range of computers, all conforming to the same architecture, so customers could update and upgrade their hardware without rewriting much of their programming. The promise was that application code written for the S/360 and its OS/360 operating system would be upward compatible with future versions of the architecture. With rare exceptions, this promise has been kept even until now: while there have been many revisions and additions since OS/360, most application code written for it will still run properly on current operating systems and current hardware.

The major advantage of this is that code written to this architecture in the 1960s and beyond will still function today. The major disadvantage is that code written to this architecture in the 1960s and beyond will still function today, and this is one of the reasons some of these older systems do not die.