This is why it is important to plan for disaster recovery and for multi-cloud architectures.
Our applications and databases need ultra-high availability, which can be achieved by hosting applications and data platforms in different regions for failover.
Businesses with critical workloads should also plan for replication across multiple cloud platforms.
You can use one of the existing solutions that help with such implementations for data platforms:
- Qlik Replicate
- HexaRocket
among others.
Or you can implement the native replication solutions available with the data platforms themselves.
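For example, PostgreSQL's built-in logical replication can replicate a table to another region or cloud. A minimal sketch driven from Python with psycopg2 (host names, database, table, and credentials are hypothetical):

```python
import psycopg2  # assumes psycopg2 is installed

# On the primary: publish changes from the orders table.
pub = psycopg2.connect("dbname=app host=primary.example.com user=repl")
pub.autocommit = True
with pub.cursor() as cur:
    cur.execute("CREATE PUBLICATION orders_pub FOR TABLE orders;")

# On the replica (e.g. another region or cloud): subscribe to it.
sub = psycopg2.connect("dbname=app host=replica.example.com user=repl")
sub.autocommit = True  # CREATE SUBSCRIPTION cannot run in a transaction
with sub.cursor() as cur:
    cur.execute("""
        CREATE SUBSCRIPTION orders_sub
        CONNECTION 'host=primary.example.com dbname=app user=repl'
        PUBLICATION orders_pub;
    """)
```

Oracle and SQL Server offer analogous native options (e.g. Data Guard and GoldenGate, Always On availability groups).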
I am working on the world's first end-to-end database migration tool, supporting Oracle-to-PostgreSQL and MSSQL-to-PostgreSQL migrations, with AI for schema migration.
Until now, people have used separate tools for schema migration and for data migration/replication.
In the process, we ended up building a data migration and replication tool that supports any combination of Oracle, SQL Server (MSSQL), and PostgreSQL.
For some applications it might be of great use, but for a vast and complex application architecture, the libyear metric may oversimplify the realities of dependency management: compatibility issues, updates, security patches, and so on.
I noticed that it focuses only on the age of dependencies, without considering other factors such as how critical an update is, how stable it is, and what improvements newer versions bring.
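For context, the metric itself is tiny: it just totals how far each installed release lags behind the latest one. A minimal sketch in Python (the package names and dates are made up):

```python
from datetime import date

# Release date of the installed version vs. the latest release.
installed = {"requests": date(2021, 7, 13), "numpy": date(2022, 6, 22)}
latest    = {"requests": date(2024, 5, 29), "numpy": date(2024, 6, 16)}

def libyear(installed: dict[str, date], latest: dict[str, date]) -> float:
    """Sum over dependencies of the lag between installed and latest
    release dates, expressed in years."""
    total_days = sum((latest[p] - installed[p]).days for p in installed)
    return total_days / 365.25

print(f"{libyear(installed, latest):.1f} libyears behind")
```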
Right, but that means it's not a decent starting measure.
To me, "decent starting measure if there is no appetite for something more in-depth" sounds like "just drop it in, it's enough to get started, we'll figure out the rest later", but that could be temporarily harmful.
Exercising judgment is the opposite of that, no? Then you're going into depth.
At a previous company we had this big (filterable) web-app matrix which listed each project dependency, but the neat thing was that you could tag dependencies to add weight and importance.
Initially I thought it would need to be more complex, but it was more than enough.
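The core of that idea fits in a few lines: scale each dependency's lag by a weight derived from its tag. The tags and weights below are invented for illustration:

```python
# Hypothetical tag weights: security-critical deps count triple,
# dev-only tooling counts half.
WEIGHTS = {"security": 3.0, "core": 2.0, "dev-only": 0.5}

def weighted_lag(lag_years: dict[str, float], tags: dict[str, str]) -> float:
    """Total lag in years, with each dependency scaled by its tag's weight
    (untagged dependencies default to a weight of 1.0)."""
    return sum(
        years * WEIGHTS.get(tags.get(dep, ""), 1.0)
        for dep, years in lag_years.items()
    )

print(weighted_lag({"openssl": 1.2, "eslint": 2.0},
                   {"openssl": "security", "eslint": "dev-only"}))
# 1.2 * 3.0 + 2.0 * 0.5 = 4.6
```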
Relying on external APIs or databases within activities might lead to variability in workflow execution.
Also, handling HTTP errors in activities by raising an "ApplicationError" based on the status code might simplify error handling, but I'd want to see how it accounts for more complex scenarios where errors are transient, or where a retry could succeed even for some client errors like rate limiting or temporary unavailability.
Since the asyncio library itself has a steep learning curve, when integrating it with workflow systems like Temporal, which also use Python's native asynchronous features, developers should watch out for indirect or subtle bugs, especially in error handling and task management.
> Relying on external APIs or databases within activities might lead to variability in workflow execution.
This is why they are activities: their results are stored in history, so the workflow remains deterministic.
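A minimal sketch with the Temporal Python SDK, assuming a hypothetical fetch_order activity: the external call lives in the activity, and on replay the workflow reads the recorded result from history instead of calling out again.

```python
from datetime import timedelta
from temporalio import activity, workflow

@activity.defn
async def fetch_order(order_id: str) -> str:
    # Non-deterministic work (HTTP calls, DB queries) belongs here;
    # a real implementation would call the external API.
    return f"order {order_id}"

@workflow.defn
class OrderWorkflow:
    @workflow.run
    async def run(self, order_id: str) -> str:
        # The activity's result is recorded in workflow history; on
        # replay, the stored result is reused, keeping the workflow
        # deterministic even though the activity itself is not.
        return await workflow.execute_activity(
            fetch_order,
            order_id,
            start_to_close_timeout=timedelta(seconds=30),
        )
```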
> I'd want to see how it accounts for more complex scenarios where errors are transient, or where a retry could succeed even for some client errors like rate limiting or temporary unavailability.
Temporal allows you to specify whether an error is retryable or not.
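In the Python SDK, that looks roughly like the sketch below; the status-code policy shown (retry on 429 and 5xx, fail fast on other 4xx) is one reasonable choice, not necessarily the article's, and http_get is a hypothetical helper.

```python
from temporalio import activity
from temporalio.exceptions import ApplicationError

@activity.defn
async def call_api(url: str) -> str:
    status, body = await http_get(url)  # hypothetical HTTP helper
    if status == 429 or status >= 500:
        # Transient (rate limited, server error): raise a retryable
        # error and let the activity's RetryPolicy drive the retries.
        raise ApplicationError(f"retryable HTTP {status}")
    if status >= 400:
        # Permanent client error: fail the activity immediately.
        raise ApplicationError(f"HTTP {status}", non_retryable=True)
    return body
```

The caller can also exclude whole error types via RetryPolicy(non_retryable_error_types=[...]) when scheduling the activity.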
It was probably a matter of making an observation and a forecast at the right time.
I remember my days at one of the top home and enterprise PC manufacturers over 15 years ago, when there was criticism of smartphones.
People laughed, assuming that a smartphone was of no use and that people would prefer a PC or a laptop. The rest is history.
What matters, at all times, is timing: identifying something that can change the world at the right moment.
This is where top leadership roles come into play:
identifying the gaps and introducing an immediate action plan to make the most of the opportunity.
Sure, and not to discredit your observation, but what other observations have you made in the last 5 years that didn't pan out, regarding politics, sports, the stock market, COVID, or other tech trends? The evaluation can't be done only in hindsight, and if you're right about 1 in 10 things, would that warrant a $1B investment in each?
The thing is that we had small notebooks/agendas/notepads, then we moved on to PDAs when things turned digital; it's not a big stretch to imagine how smartphones should work once the hardware is there (what I read is that manufacturers were cheaping out and locking things down). I still believe that what LLMs do best is analyzing natural language and producing coherent (not necessarily true) output. There may be business needs for that, but I still have not seen a truly individual tool, the way personal computing is (you can go to a desert island with a laptop and compute). I agree that the executives' job is to predict and plan strategic responses. But so far, it seems to be useful only to those who like quick answers, even if they may be untrue.
I remember reading about Xerox people forecasting smartphones and tablets. There is a picture of their brainstorming in which they used small pages to model a handheld computer.
Science fiction was, and still is, full of ever closer integration between computers and humans.
Technology is driven by imagination more often than not.