Worldwide, digital health technologies are now a significant component of health care systems. Rapid advances in processing power and wireless technology, along with growing interest in the use of AI and machine learning in health care, have driven this surge in adoption. Patients, health care providers, health system managers, and data services can all benefit from digital health technologies, which include the application of software, apps, programs, and artificial intelligence (AI) to specific processes, therapeutic goals, and public health interventions. They can be used alone or in combination with other products, such as diagnostic tests and medical devices. Other important digital health technologies include systems that support patient administration and operations.

A More Proactive Strategy

A few years ago, many countries lacked digital health strategies as well as clear regulations on market access, safety, and quality. Health policies did not address how to facilitate the widespread adoption and deployment of digital health technologies. Moreover, decision-making about digital health technologies has tended to occur in silos, with privacy authorities considering data protection issues separately from health authorities focused on efficacy, safety, and quality. Fragmented decision-making remains a problem, but the decision-making landscape has changed with rising interest in the relationship between AI and health care, which may help address it. First, many countries are proactively addressing AI while implementing government-wide strategies that bring together key institutions and stakeholders. Second, the idea of regulating AI-based medical solutions is gaining traction. Third, the WHO and the Organization for Economic Cooperation and Development are actively shaping this policy discussion at the global level, providing a forum for decision-makers to convene.

What Must Take Place?

The onset of COVID-19 and the continuing rise in both the supply of and demand for digital health technologies have increased the focus on research and policy in this area across national and international borders. Despite all of AI's potential benefits for medicine, the essential question of whether a given technology is beneficial still needs to be answered. Any attempt to answer this question, and to build the knowledge and evidence base for the ethical use of AI, should be grounded in the following principles. First, policymakers need to understand the risks and features of AI solutions: straightforward monitoring, for example, might carry low risk, whereas diagnosis and clinical decision-making might carry very high risk. Second, countries should establish evidence requirements for AI solutions. Health care economic evaluation provides methods for assessing the costs and benefits of medicines and medical technologies, and health technology assessment organizations in several countries use these methods to inform decisions about the adoption and use of technologies. For example, the National Institute for Health and Care Excellence, the UK's health technology assessment body, updated its Evidence Standards Framework to incorporate evidence requirements for AI solutions. Third, rigorous research on AI is required.
To improve quality and standardize AI-related research outputs, researchers have recently worked on setting criteria for economic evaluations of AI. Fourth, AI in health care is a rapidly evolving field that needs methods for testing applications and solutions, and collaboration offers considerable advantages here. Possible courses of action include hearing and discussing public concerns, publicly reporting and tracking AI performance, establishing guidelines for data governance, encouraging and overseeing adherence to responsible AI principles, and monitoring applications and solutions after they reach the market. Lastly, international cooperation will support domestic initiatives. This could mean placing greater emphasis on operationalizing rules and ethics guidelines that remove unnecessary obstacles to the ethical use of AI while ensuring that suitable risk assessment systems, control mechanisms, and oversight are in place. The more market activity there is, the more will be learned from monitoring the development of AI policy. International forums provide a venue for cooperative problem solving, coordination to reduce barriers, and exchange of collective learning to find policy answers. AI is now a use case for continuing global health collaboration and education. Indeed, it highlights a concept first proposed more than twenty years ago by the National Academy of Medicine as a continuous learning model: learning health systems. This strategy is relevant to AI in health and is more urgent than ever.