On June 19, Etherscan, the leading Ethereum block explorer and analytics platform, unveiled a new feature called ‘Code Reader.’ The tool uses artificial intelligence to retrieve and interpret the source code of any specified contract address. Driven by user prompts, Code Reader calls on OpenAI’s large language model to generate insights about the contract’s source code. The platform provides instructions on its tutorial page for using the tool effectively.
Utilizing Code Reader: Requirements and Guidelines
To use Code Reader, you need an active, valid OpenAI API key and a sufficient allowance within OpenAI’s usage limits. Notably, the tool is designed with users’ privacy in mind: it does not store your API keys, keeping your credentials safe.
Code Reader offers a direct window into the code underlying smart contracts. Its potential use cases range from AI-generated explanations of contract code and comprehensive lists of an Ethereum smart contract’s functions to insight into how the underlying contract interacts with decentralized applications.
Once the contract files are retrieved, you can select a specific source code file for detailed analysis. Moreover, you can modify the source code directly in the user interface before allowing the AI to analyze it, offering a new level of interactivity and flexibility.
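The retrieval step described above can be sketched in a few lines. The `getsourcecode` action is Etherscan’s public API endpoint for verified contract source; the address, API key, and the `build_prompt` helper combining the source with a user question are illustrative placeholders, not part of Code Reader’s actual implementation.

```python
from urllib.parse import urlencode

ETHERSCAN_API = "https://api.etherscan.io/api"

def build_source_request(address: str, api_key: str) -> str:
    """Build the URL that asks Etherscan for a contract's verified source code."""
    params = {
        "module": "contract",
        "action": "getsourcecode",  # Etherscan's verified-source endpoint
        "address": address,
        "apikey": api_key,
    }
    return f"{ETHERSCAN_API}?{urlencode(params)}"

def build_prompt(source_code: str, question: str) -> str:
    """Combine retrieved source with a user question, ready to send to an LLM."""
    return f"{question}\n\nContract source:\n{source_code}"

# Example (placeholder address and key):
url = build_source_request("0x0000000000000000000000000000000000000000", "YOUR_KEY")
prompt = build_prompt("pragma solidity ^0.8.0; ...", "What does this contract do?")
```

Fetching `url` returns JSON whose `result` field holds the source files; the assembled prompt would then be sent to OpenAI’s chat completions API with the user’s own key.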
AI Progress and Potential Challenges
Amid the current AI boom, several experts have voiced concerns about the scalability of existing AI models. A recent study by the Singapore-based venture capital firm Foresight Ventures asserted that computing power resources will be “the next big battlefield” over the coming decade.
Despite the growing interest in training expansive AI models on decentralized distributed computing power networks, there exist substantial challenges. Complexities in data synchronization, network optimization, and data privacy and security are among the significant constraints, creating hurdles in current prototypes’ effective implementation.
The Reality of AI Model Training and its Implications
Foresight Ventures offered an eye-opening example of the complexities of AI model training. The team pointed out that a large model with 175 billion parameters would occupy roughly 700 gigabytes when stored in single-precision floating-point representation.
These parameters would need regular transmission and update across the computing nodes in a distributed training setup. Assuming a scenario involving 100 computing nodes, with each node required to update all parameters at every step, the model would demand 70 terabytes of data transmission per second. This volume significantly overshadows the capabilities of most existing networks.
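The figures quoted above can be checked with simple arithmetic: 175 billion parameters at 4 bytes each (single precision) gives 700 GB, and 100 nodes each exchanging the full parameter set per step gives 70 TB of traffic per step, matching the article’s per-second figure under the assumption of one step per second.

```python
# Back-of-the-envelope check of the Foresight Ventures figures.
params = 175e9                # 175 billion parameters
bytes_per_param = 4           # single-precision (FP32) floats are 4 bytes

model_bytes = params * bytes_per_param        # total model size in bytes
nodes = 100                                   # assumed distributed setup

# If every node must send/receive the full parameter set each step:
per_step_bytes = model_bytes * nodes

print(model_bytes / 1e9, "GB model size")        # 700 GB
print(per_step_bytes / 1e12, "TB moved per step") # 70 TB
```

At ~70 TB per step, even a 100 Gbit/s (12.5 GB/s) link per node would need over an hour of pure transfer time per step, which is why the researchers call this volume far beyond most existing networks.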
The researchers concluded with a pertinent observation. In many instances, smaller AI models remain a more practical choice and should not be hastily discounted due to the FOMO (Fear Of Missing Out) tide favoring larger models. Their message serves as a cautious reminder to the tech community to strike a balance between ambition and feasibility in the rapidly evolving field of artificial intelligence.
Please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. FintechMode is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.