Search results for "MATH"
03:47

MATH (MATH) has pumped 14.61% in the last 24 hours.

Gate News Bot, August 4: according to CoinMarketCap data, MATH (MATH) is currently priced at $0.11, up 14.61% in the last 24 hours, with a 24-hour high of $0.14 and a low of $0.10. The current market capitalization is approximately $13 million, an increase of nearly $1.65 million from yesterday. Math Wallet is a multi-chain cryptocurrency wallet that supports over 220 mainstream public chains. It is available as a mobile app, browser extension, and web wallet, and is dedicated to providing convenient asset management and DApp access for Web3 users. Recent important news about MATH: 1️⃣ **MATH Olympiad participants focus on blockchain AI trust issues**: a mathematics Olympiad contestant is trying to solve problems at the intersection of blockchain and artificial intelligence.
21:51

Robinhood CEO's AI math startup valued at nearly $900 million.

Golden Finance reports that Harmonic AI, an artificial intelligence startup founded by Robinhood CEO Vlad Tenev, has raised $100 million to address a problem that can sometimes plague AI models: mathematics. The Series B funding round values Harmonic at $875 million. Harmonic plans to open its flagship AI model, "Aristotle," to researchers and the public later this year.
08:31

Cetus suffered an attack with a loss of $230 million, and $162 million of the stolen funds have been frozen.

Gate News bot message: according to an analysis report released by the SlowMist team, the Cetus protocol was attacked, and the attacker exploited a vulnerability in the overflow check of the checked_shlw function to carry out the attack. The attacker first borrowed haSUI via a flash loan, then exploited the flaw to obtain a large amount of liquidity with only 1 token. The attack resulted in an estimated loss of about $230 million across various digital assets, including SUI, vSUI, and USDC. The attacker bridged part of the funds to an EVM address via the Sui Bridge and deposited $10 million of assets into Suilend. The SUI Foundation has so far frozen $162 million of the stolen funds. Cetus has completed a bug fix, and the SlowMist team recommends that developers rigorously verify the boundary conditions of math functions. Source: Wu Says
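As a rough illustration of the kind of boundary check SlowMist is recommending, here is a minimal Python sketch (not Cetus's actual Move code) of a checked left shift for 256-bit fixed-width math, with an explicit boundary condition and the boundary tests a reviewer would expect; the 64-bit shift width is an assumption made for illustration.

```python
# Illustrative sketch only (Python, not Cetus's actual Move code): a left-shift
# helper for 256-bit fixed-width math whose overflow boundary is checked
# explicitly, plus the kind of boundary tests SlowMist recommends writing.

U256_BITS = 256
SHIFT_BITS = 64  # checked_shlw-style shift width, assumed for illustration


def checked_shl_u256(value: int, shift: int = SHIFT_BITS) -> int:
    """Shift `value` left by `shift` bits, or raise if the result would not
    fit in an unsigned 256-bit integer."""
    if value < 0:
        raise ValueError("value must be non-negative")
    # Boundary condition: value must occupy no more than (256 - shift) bits.
    if value >= 1 << (U256_BITS - shift):
        raise OverflowError("left shift would overflow u256")
    return value << shift


# Boundary tests: the largest legal input passes, one past the boundary fails.
assert checked_shl_u256((1 << 192) - 1) == ((1 << 192) - 1) << 64
try:
    checked_shl_u256(1 << 192)
    raise AssertionError("overflow was not detected")
except OverflowError:
    pass
```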
01:04

Epoch AI predicts: progress on reasoning models could slow within a year at the earliest.

On May 14, Epoch AI, a non-profit AI research institution, released a report arguing that AI companies will find it hard to keep squeezing large performance gains out of reasoning models, and that progress on reasoning models could slow within a year at the earliest. Based on public data and stated assumptions, the report highlights the constraints of computing resources and rising research overhead. The AI industry has long relied on these models to improve benchmark performance, but that reliance is being challenged. Josh You, an analyst at the institution, noted that the rise of reasoning models stems from their strong performance on specific tasks; OpenAI's o3 model, for example, has focused on improving math and programming skills in recent months. Reasoning models improve performance by spending more compute on each problem, but the cost is that they take longer than traditional models to handle complex tasks.
05:55

TRON Nile testnet pre-releases GreatVoyage-v4.8.0 (Kant)

The TRON Nile testnet has been upgraded to the pre-release version of GreatVoyage-v4.8.0 (Kant). This mandatory upgrade includes several important updates and governance proposals, such as migrating Math operations to StrictMath, enhanced transaction verification, and new TVM instructions. Operators running testnet nodes should upgrade as soon as possible. The proposals take effect only after a vote, and rollout to mainnet is yet to be confirmed.
22:09
Researchers from BTQ, a startup developing blockchain technology designed to resist quantum computer attacks, recently published a journal article proposing a quantum-based alternative to the proof-of-work (PoW) algorithm, called Coarse-Grained Boson Sampling (CGBS). The method uses light particles (bosons) to generate unique patterns (samples) that reflect the current state of the blockchain, rather than relying on hash-based math problems. Reportedly, the approach replaces the traditional PoW cryptographic challenge with quantum sampling tasks, significantly reducing energy consumption while preserving the security and decentralization of the network.
08:03

Dark Side of the Moon releases a mathematical reasoning model

According to Jin10 Data on November 16, Dark Side of the Moon's Kimi released a new-generation mathematical reasoning model, k0-math. Benchmark tests show that Kimi k0-math's mathematical ability can be benchmarked against OpenAI's models.
07:29
Let's do this math! ⏬ #BTC is true scarcity.
00:23
PANews reported on April 12 that OpenAI said on the X platform that the new version of GPT-4 Turbo is now open to ChatGPT paying users. This version improves skills in writing, math, logical reasoning, and coding.
14:30
According to market news, SBF's trial is underway. SBF's lawyer Mukasey said he understands the victims' pain but will appeal, and that SBF is different from Madoff: he did not want to personally cause pain to anyone in any way, he is not a ruthless financial serial killer, and he is not predatory. He uses the math in his head to make decisions, not malice in his heart. It's easy to get caught up in old-fashioned tales of greedy liars... SBF never spent money recklessly. He persevered to the end, and it was very important to him that people were rewarded.
14:30
PANews reported on March 28 that in the trial of SBF (Sam Bankman-Fried), Judge Kaplan said: "Thank you. Is there anyone else? No? Okay... Normally, I make a list of the various documents I have considered for the record. I'm going to deviate from the norm. As of 9 o'clock last night, or this morning, I had received more than 1,000 pages of documents. 451 of those pages are from the defendant, with the same number of documents provided by the U.S. government, as well as the Victim Impact Statements (VIS)." Commenting on the trial, defense attorney Mukasey said: "My team didn't go through the trial, but we studied the record. The government stated that James Nicholson had stolen from widows. Karl Greenwood was sentenced by Judge Ramos." The Inner City Press reported on the Second Circuit Court of Appeals appeal concerning Greenwood's sealed documents. In defense of SBF, Mukasey noted: "Madoff stole from Holocaust survivors. But Sam is not such a person. He didn't want to cause pain to anyone personally in any way. Sam is not a ruthless financial serial killer. He is not predatory. He makes decisions based on mathematical calculations in his head, not malice in his heart." Mukasey added: "He was a clumsy math nerd. He is passionate about vegetarianism. He has wisdom beyond the ordinary. He's a wonderful puzzle. His ability to parse words surpasses that of Talmudic scholars. As a billionaire, he doesn't care about material possessions."
00:46
Odaily Planet Daily News - Fair Math, a developer of fully homomorphic encryption privacy protection technology, announced the completion of a $1.4 million Pre-Seed funding round, led by gumi Cryptos Capital, Inception Capital, and Polymorphic Capital. (Finsmes)
08:03

Bitcoin market analysis

What about the total liquidated in the past two days? Let's do the math: it's close to about $500 million. Then there was another huge long liquidation. Will there be a counter-trend move in the future? I'm starting to get a little conservative after this wave. Why do I see it that way? Because every time there is a huge long liquidation, prices rebound quickly, but this wave of the market has kept grinding down with no time for a fundamental rebound. Right now it is purely short-driven, a market state that absolutely crushes the bulls. What about Bitcoin ETFs? They are very active, surpassing platinum and becoming the largest ETF behind gold, yet the price of Bitcoin fell further. So why, exactly? The reason is GBTC. Let's take a look at this fund; I really don't know whether it's intentional or not. Its fund management fee is about 1.5% per annum, which means that if you hold Bitcoin through Grayscale's GBTC, you basically take a net loss of about 1.5% every year even when the price doesn't move. Compared to other Bitcoin ETFs, this fee is more expensive, in some cases more than double. On top of that, the Bitcoin held by the Grayscale fund is being sold in large quantities onto one exchange, with more than a billion dollars hitting the market every day. I also saw a message this morning: they began liquidation around the end of September, and billions of dollars have been dumped so far for buyers to pick up. Maybe 48,400,900 will be them.
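To put a rough number on that fee drag, here is a minimal Python sketch; the 1.5% figure comes from the text above, while the 0.6% comparison fee and the assumption of a flat BTC price are hypothetical and chosen only for illustration.

```python
# Back-of-the-envelope sketch of the fee drag described above (flat BTC price).
# The 1.5% GBTC-style fee comes from the text; the 0.6% comparison fee is a
# hypothetical stand-in for a cheaper Bitcoin ETF.
def value_after_fees(initial_usd: float, annual_fee: float, years: int) -> float:
    """Value of a holding whose underlying price is flat, net of an annual fee."""
    return initial_usd * (1.0 - annual_fee) ** years

start = 10_000.0
for fee in (0.015, 0.006):
    print(f"fee {fee:.1%}: after 5 years -> ${value_after_fees(start, fee, 5):,.2f}")
# fee 1.5%: after 5 years -> roughly $9,272 (about a 7% net loss with no price move)
# fee 0.6%: after 5 years -> roughly $9,704
```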
01:14
In the early morning of January 11, OpenAI announced the official launch of the GPT Store for ChatGPT Plus, Team, and Enterprise users; users have created more than 3 million custom versions of ChatGPT since GPTs were released two months ago. The GPT Store brings together customized versions of ChatGPT created by users for a variety of tasks, such as a chatbot that can teach kids math, as well as programming tutors, book guides, and more. In addition, OpenAI has launched a new paid package, ChatGPT Team, for organizations with smaller teams: $25 per user per month when billed annually and $30 per month when billed monthly. Like the Enterprise plan, the Team plan comes with data privacy features. OpenAI previously offered two paid plans for ChatGPT: ChatGPT Plus for individual users and ChatGPT Enterprise for large enterprises.
23:26
Golden Finance reported that on Wednesday (January 10), local time, artificial intelligence research company OpenAI launched the online store "GPT Store". Previously, due to personnel turmoil, the company postponed the launch of this feature. According to the press release, the GPT Store, which began rolling out to paying users, teams and business users on Wednesday, brings together customized versions of ChatGPT created by users for various tasks, such as chatbots that can teach children math, as well as programming tutors, book guides, and more. At the same time, OpenAI has also launched a new paid package called "ChatGPT Team" for enterprise users with smaller teams: $25 per user per month when billed annually and $30 per month when billed monthly. Like Enterprise users, Team's plans come with data privacy protection. (Titanium Media)
09:41
According to a report by IT Home on December 15, Google DeepMind recently announced a model training method called "FunSearch", which it says can tackle a series of complex problems in mathematics and computer science, including the cap set problem and the bin packing problem. The FunSearch method pairs the AI model with an "evaluator" system: the AI model outputs a series of creative problem-solving methods, the evaluator judges the solutions the model produces, and after repeated iterations this trains an AI model with stronger mathematical ability. Google DeepMind tested the approach with the PaLM 2 model. The researchers set up a dedicated "code pool" and fed a series of questions into the model in the form of code, along with an evaluator process; in each iteration the model automatically picks questions from the code pool, generates creative new solutions, and submits them to the evaluator, and the best solutions are added back into the code pool to start another iteration.
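A conceptual sketch of that propose / evaluate / re-seed loop follows. It is not DeepMind's implementation: the LLM proposal step is replaced by a random-mutation stub, and the evaluator scores candidates against a toy objective chosen only to make the loop runnable.

```python
# Conceptual sketch of a FunSearch-style loop, not DeepMind's implementation.
# The LLM proposal step is replaced by a random-mutation stub, and the
# evaluator scores candidates against a toy objective (f(n) = n^2).
import random
from typing import Callable, List

def evaluate(program: Callable[[int], int]) -> float:
    """Problem-specific scorer: higher is better, 0 means a perfect candidate."""
    tests = [(2, 4), (3, 9), (5, 25)]
    return -sum(abs(program(n) - want) for n, want in tests)

def propose(pool: List[Callable[[int], int]]) -> Callable[[int], int]:
    """Stand-in for the LLM step: mutate a program drawn from the code pool."""
    base = random.choice(pool)
    k = random.randint(-2, 2)
    return lambda n, base=base, k=k: base(n) + k

pool: List[Callable[[int], int]] = [lambda n: n * n + 3]  # seed "code pool"
best, best_score = pool[0], evaluate(pool[0])
for _ in range(200):                      # iterate: propose -> evaluate -> re-seed
    cand = propose(pool)
    score = evaluate(cand)
    if score > best_score:
        best, best_score = cand, score
        pool.append(cand)                 # better solutions rejoin the pool
print(best_score)                         # climbs toward 0 as candidates improve
```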
01:44
PANews reported on November 23 that, according to Reuters, people familiar with the matter said that four days before OpenAI CEO Sam Altman was previously fired, several researchers sent a letter to the board of directors, warning that a powerful artificial intelligence discovery could threaten humanity. The previously unreported letter and the AI algorithm were a key development in the board's removal of Sam Altman, as the board fired Altman a day after the letter was sent, and the letter was one of many board grievances that led to Sam Altman's dismissal. Mira Murati, a longtime executive at the company, mentioned the project, called Q*, to employees on Wednesday and said a letter had been sent to the board ahead of this weekend's event, according to sources. OpenAI has made progress on Q* (pronounced Q-star), which some insiders believe could be a breakthrough for OpenAI in the field of superintelligence, also known as artificial general intelligence (AGI). OpenAI defines AGI as an artificial intelligence system that is smarter than humans. Given the huge computational resources, the new model is able to solve certain mathematical problems. Although Q*'s math score is only at the level of elementary school students, an excellent score on such a test makes the researchers very optimistic about its future success. In their letter to the board, the researchers pointed to the capabilities and potential dangers of AI, but did not specify the specific safety concerns mentioned in the letter. A day later, the board fired Altman. Researchers see mathematics as the frontier of generative AI development. Currently, generative AI excels at writing and language translation by statistically predicting the next word, and the answers to the same question can vary greatly. But conquering mathematical abilities means that AI will have stronger reasoning abilities similar to human intelligence. Unlike calculators, which can only solve a finite number of operations, AGI can be generalized, learned, and understood. Against this backdrop, Altman strives to make ChatGPT one of the fastest-growing software applications in history and has attracted the necessary investment and computing resources from Microsoft to get closer to superintelligence (AGI). Previously, it was reported yesterday that OpenAI announced that Sam Altman will return to OpenAI as CEO.
10:18
A new study from the Pew Research Center in the United States found that about one in five teenagers who have heard of ChatGPT admit to using the OpenAI technology to help with their school assignments, as reported by IT Home on November 21. The researchers found that the majority of teens are aware of ChatGPT, from which they infer that about 13% of all teens in the United States use the chatbot to complete assignments. The team surveyed teens between the ages of 13 and 17 and found that older students were more likely to use ChatGPT for classwork. Pengcheng Shi, associate dean of the Department of Computing and Information Science at the Rochester Institute of Technology, said, "It's there, and you can't stop people from using it, so the question now is, how best to use it." The Pew Research Center found that most students thought it was okay to use ChatGPT to study a new topic, but not appropriate to use it to solve math problems. Few teens find it acceptable to use it to write essays, and about 20% of students admit they feel unsure about the ethics of using the technology in all three situations.
02:31

Economist: The Fed is shrinking its balance sheet

(1) An economist active on an American forum shared his views on the Fed's balance sheet reduction: the Fed is tightening financial conditions to curb the worst inflation in 40 years, using two tools. As is customary, it is raising the policy rate (the federal funds rate), the direct consequence of which is higher interest rates on loans of all types (mortgages, car loans, credit cards, etc.) and all tenors. So far it has raised the target federal funds rate by more than 500 basis points and promised further hikes if the economic data suggest they are necessary. (2) But readers may not have noticed that the Fed is also shrinking its balance sheet (also known as quantitative tightening). Recall that in 2020 and 2021 the Fed printed $5 trillion in new cash and bought Treasury and mortgage-backed securities at a rate of $90 billion per month. The Fed is now doing the opposite, cutting up to $95 billion a month from its balance sheet, effectively siphoning cash out of financial markets. (3) So the Fed's most attentive followers will know about quantitative tightening and that the maximum pace of tightening is $95 billion per month. Those who have bothered to do the math can calculate that the balance sheet has shrunk by more than $1 trillion since its peak in April 2022. The Fed released the minutes of the Federal Open Market Committee (FOMC) meeting, Chair Powell's press conference, and FOMC members' economic and interest rate projections, but remained silent on balance-sheet policy.
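For readers who want to check that arithmetic, here is a back-of-the-envelope sketch; it assumes the runoff always hits the $95 billion monthly cap, which overstates the actual pace (QT started at a lower cap and monthly runoff often fell short), so the real timeline was somewhat longer.

```python
# Back-of-the-envelope check of the ">$1 trillion since April 2022" claim.
# Assumption: runoff always hits the $95B monthly cap. In reality QT started
# at a lower cap and often ran below it, so the actual timeline was longer.
monthly_cap_usd = 95e9
target_reduction_usd = 1e12

months_needed = target_reduction_usd / monthly_cap_usd
print(f"Months of QT at the full cap to shed $1 trillion: {months_needed:.1f}")
# ~10.5 months, so crossing $1 trillion well within a year and a half of the
# April 2022 peak is arithmetically consistent with the stated pace.
```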
01:12
According to a report by IT Home on November 7, OpenAI held its first developer conference today and launched a feature called "GPTs" for ChatGPT, allowing users to create "their own version of ChatGPT" for specific needs. OpenAI says the feature gives users "some control" over ChatGPT: for example, a business can create a special assistant that only employees can access, and parents can create a problem-solving tool that specializes in teaching their children how to solve tricky math problems. OpenAI also launched a preview version of GPT-4 Turbo, a model that supports a 128K context window, input tokens three times cheaper than GPT-4, double the rate limits, and a knowledge cutoff updated to April 2023, in addition to adding JSON Mode and updated function-calling capabilities. GPT-4 Turbo is more powerful and cheaper to build on. The new JSON Mode ensures that the model returns well-formed JSON, making it easier for developers to get consistent, repeatable output, which helps control model behavior and write unit tests against model output. Yesterday it was reported that the content of OpenAI's press conference had been leaked: GPT-4 Turbo would be launched with a context length of 128K.
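As a minimal sketch of how a developer would use that mode with the OpenAI Python SDK (v1-style client), under the assumptions that an API key is configured and that the DevDay preview model name below is still valid:

```python
# Minimal sketch of JSON Mode with the OpenAI Python SDK (v1-style client).
# Assumes OPENAI_API_KEY is set in the environment; the model name below is
# the GPT-4 Turbo preview identifier from DevDay and may have been superseded.
import json
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4-1106-preview",
    response_format={"type": "json_object"},  # JSON Mode: output is valid JSON
    messages=[
        # JSON Mode requires the prompt itself to mention JSON.
        {"role": "system", "content": "Reply in JSON with keys 'answer' and 'steps'."},
        {"role": "user", "content": "What is 12 * 7? Return the result as JSON."},
    ],
)

data = json.loads(resp.choices[0].message.content)  # parses cleanly by design
print(data["answer"])
```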
02:10
According to Xin Zhiyuan, Terence Tao (Tao Zhexuan), the celebrated mathematician who has recently been keen on using GPT-4 and Copilot in his research, discovered a hidden bug in one of his papers with the help of AI. Some math enthusiasts exclaimed in the post: this is amazing, and it's great to see the spread of AI proof assistants, laying a stronger foundation for the future of math research. Tao said: "It's completely possible. Maybe in the near future, we can build an AI layer on top of Lean. By describing the steps in the proof to the AI, the AI can use Lean to execute the proof, invoking a computer algebra package in the process." In June of this year, Tao predicted in a blog post about his GPT-4 trial experience that by 2026 AI will be combined with search and symbolic math tools to become a trusted co-author in mathematical research. Since then, others have been demonstrating this; for example, scholars from Caltech, NVIDIA, MIT, and other institutions have built a theorem prover based on open-source LLMs.
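To make the Lean idea concrete, here is a toy Lean 4 snippet (not from Tao's work) showing the sort of small, machine-checked statement such an AI layer could emit and have Lean's kernel verify.

```lean
-- Toy Lean 4 example (not from Tao's work): the sort of small, machine-checked
-- step an "AI layer on top of Lean" could emit and have the kernel verify.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```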
06:44

Experts raise concerns about AI hallucinations on platforms such as ChatGPT

Experts have raised concerns about the hallucination phenomenon in AI programs such as ChatGPT on popular platforms. AI hallucinations are cases in which information appears to be correct but is actually inaccurate, and platforms such as ChatGPT do provide inaccurate information. For example, the Duke University Library pointed to the problem of false citations, Forbes published a story about ChatGPT fooling scientists with fake abstracts, and Fortune magazine cited a study showing that ChatGPT's accuracy in answering certain math questions dropped from 98% to just 2%. Asked whether it fabricates information, misinterprets data, or relies on misinformation, ChatGPT responds that its answers are generated from the patterns in its training data, current as of its last update in September 2021; AI models may produce incorrect or inaccurate information if the input data or query is flawed, but this is not the same as hallucination in the traditional sense.
18:16
Golden Finance reports that SBF defense attorney Mark Cohen laid out his defense at Wednesday's fraud trial, portraying the 31-year-old former billionaire as a "math nerd" who overlooked risk management when building the FTX exchange but did not misappropriate client funds. Cohen admitted in his opening statement that FTX lent money to Alameda, but said that SBF, who graduated from MIT in 2014 with a degree in physics, had reason to believe the loans were permitted and backed by collateral. Cohen said no theft occurred and that Sam did not deceive anyone and had no intention of deceiving anyone; Sam's behavior was genuine. As the startup grew rapidly, some key aspects of FTX's business, such as risk management, were "overlooked"; Sam and his colleagues were building the plane as they flew it, and no one person, no CEO, could be everywhere and do everything. Golden Finance note: today is the second day of the SBF trial, which will continue until Saturday.
11:56
PANews reported on September 25 that the British Broadcasting Corporation (BBC) will air the documentary "Downfall of the Crypto King", about FTX founder Sam Bankman-Fried (SBF), early tomorrow morning. According to the synopsis, SBF was a shaggy-haired, nerdy math genius who set out to change the cryptocurrency world but ended up becoming its biggest loser, facing the prospect of a long prison sentence. Through detailed research and interviews with his former colleagues, staff, and insiders, the film examines the numerous overlooked red flags, including his scruffy image, celebrity hype, and pledges to donate billions to charity, which led the world to turn a blind eye to alleged crimes that have plunged cryptocurrencies into crisis.
05:12
According to a report by the Science and Technology Innovation Board Daily on September 24, Zhipu AI launched the mathematical model MathGLM to improve the mathematical reasoning capabilities of large language models. It can perform complex arithmetic operations and answer Chinese math problems without using calculator tools, and some of its results exceed those of GPT-4 and ChatGPT. It has now made its global debut on the ModelScope community.
04:33
According to a report from Webmaster Home on September 21, the large mathematical model "Abel", developed by the Generative Artificial Intelligence Research Group (GAIR) at Shanghai Jiao Tong University, has performed well in mathematical reasoning, ranking first among open-source models on multiple leaderboards and surpassing competitors from American AI companies. Reportedly, the Abel model not only achieved the best results among open-source mathematical models on the authoritative GSM8K and MATH evaluation sets, but also performed well on difficult mathematics competition problems, surpassing competitors including the American AI giants OpenAI and Google. Although Abel performed well on the evaluated datasets, it still has limitations in overfitting, generalization, versatility, multilinguality, and advanced techniques, and needs further improvement and expansion.
01:21
According to IT House news on September 7, a new study from Stanford University found that the capabilities of the popular generative AI chatbot ChatGPT fluctuated within a few months. Currently, ChatGPT has a free GPT-3.5 model and a paid GPT-4 version that is smarter and faster. Researchers found that GPT-4 was effective at solving math problems in March, identifying prime numbers with 97.6% accuracy. Three months later, its accuracy dropped to 2.4%. On the other hand, GPT-3.5 got better, improving from 7.4% accuracy to 86.8%. The researchers also noticed similar fluctuations in writing code and visual reasoning. The researchers believe that the results do not truly reflect the accuracy state of ChatGPT's performance, but instead show the unintended consequences of fine-tuning the model. Essentially, when modifying part of the model to improve one task, other tasks may suffer. Why this is so is difficult to determine, because no one knows how ChatGPT works, and its code is not open source.