[syndicated profile] elementy_news_feed

Две новых работы заставляют пересмотреть наши представления о роли заразительного зевания у животных. В первой показано, что шимпанзе подхватывают зевание человекоподобного робота. Во второй доказано, что и у рыб есть заразительное зевание. И то, и другое указывает на эволюционную древность заразительного зевания и функций, которое оно выполняло и выполняет у разных животных.

[syndicated profile] register_security_feed

Posted by Lindsay Clark

6-in-10 success rate for single-step tasks

A new benchmark developed by academics shows that LLM-based AI agents perform below par on standard CRM tests and fail to understand the need for customer confidentiality.…

Ian Barwick: PgPedia Week, 2025-06-15

Jun. 16th, 2025 10:25 am
[syndicated profile] planetposgresql_feed
PostgreSQL links Blogs, podcasts, newsletters etc. Scaling Postgres 370 - New Scale Out Options (2025-06-15) Postgres Weekly Issue 603 (2025-06-12) Announcements PostgreSQL JDBC 42.7.7 Security update for CVE-2025-49146 (2025-06-13) powa-archivist 5.0.3 is out! (2025-06-11) pg_dumpbinary v2.20 released (2025-06-11) pgtt v4.3 released (2025-06-09) Pgpool-II 4.6.2 released. (2025-06-09) PostgreSQL JDBC 42.7.6 Released (2025-06-09) PGConf.EU 2025 Call for Presentations (2025-06-09) Others Contributions for the week of 2025-06-02 (Week 23) (2025-06-12) - Boriss Mejías PostgreSQL Person of the Week: Teresa Lopes (2025-06-09)

more...

[syndicated profile] register_security_feed

Posted by Richard Speed

But lose your code and it's gone for good

Updated  Windows 11 users in the European Economic Area will shortly receive a new Recall Export feature, allowing Recall snapshots to be shared with third-party apps and websites.…

[syndicated profile] ryb40_feed
Современные тренды в оформлении интерьеров все чаще возвращаются к натуральным и экологичным материалам
[syndicated profile] register_security_feed

Posted by Connor Jones

Student 'believed he could finish' software dev 'project alone and therefore that the rules did not apply to him'

A former GCHQ intern was jailed for seven-and-a-half years for stealing top-secret files during a year-long placement at the British intelligence agency.…

[syndicated profile] planetposgresql_feed

BLOBs In PostgreSQL

Previous Blog Post

I have already published a blog post about PostgreSQL blobs.

But due to someone posting to get help about another implementation on the PostgreSQL Chat Telegram group about a very unusual method to store blobs, I thought, that should now also be covered.

I did not cover that method, because it is one of the worst ideas to handle blobs inside PostgreSQL.

How It Started

Someone was migrating data from Oracle to PostgreSQL and had blobs exceeding the limit BYTEA of max 1 GB of data.
They used a method that he described as writing BLOBS to OIDs. Which is obviously not what he realy did, ad OID is a numerical data type.

What They Used

In fact they used lo. That is storing a blob in a table where the binary data is stored as TOAST in the file system where PostgreSQL is located and an OID points to the TOASTed data.

Handling By PostgreSQL

With this method it is possible to store bigger binary files than the 1 GB limit of BYTEA.

But that does also add a lot of overhead to handling the blob data. The client can only handle the complete blob data. All layers in between have to handle the data, the PostgreSQL instance to get the data by pointers from disk, using the memory to load the binary data, and client side for example ODBC or JDBC.

Deleting Lo BLOBs

Also deleting these binaray objects does have downsides, too. Deleting rows or truncating a table does not delete the blob data, it leaves orphaned blob data.

One has to take care of orphaned blob data, that is not referenced anymore in the table pg_largeobject_metadata have to be removed with another job, vacuumlo.
That causes additional disk traffic, of course.

Comparing To File Systems Deletions

Compare that to the easyness of getting a file location from the database and the content from disk. Including deleting files is much easier and does not impact the database perfomrance.

Backups

The large objects are also part of the database backups. They are blowing up your database backups.

Comparing To File Backups

You can restore a deleted or accidentally changed file just from a file backup.

The database would have to be restored to that point in time berore the change/deletion happened.
That couses database downtime and even data loss as all data after the change is not restored. All data is at the point in time of the restore.

Additional Information About Large Objects

More documentation about how to handle large objects.

Conclusion

My advice is still the same as in the previous blog post about BLOBs: Do not store them inside of a database, write them to a file server or even to S3 and only store links in a TEXT column inside the database.

[syndicated profile] register_security_feed

Posted by David Gordon

Getting employees on board can do more than prevent breaches; it can send profitability soaring

Sponsored Post  Here's a sobering reality: 95% of data breaches involve human error. So, why do most organizations still throw technology at a fundamentally human problem? It's like trying to fix a leaky roof by buying better buckets.…

[syndicated profile] elementy_news_feed

Две новых работы заставляют пересмотреть наши представления о роли заразительного зевания у животных. В первой показано, что шимпанзе подхватывают зевание человекоподобного робота. Во второй доказано, что и у рыб есть заразительное зевание. И то, и другое указывает на эволюционную древность заразительного зевания и функций, которое оно выполняло и выполняет у разных животных.

[syndicated profile] register_security_feed

Posted by Simon Sharwood

PLUS: APNIC completes re-org; India cuts costs for chipmakers; Infosys tax probe ends; and more

Asia In Brief  Australia’s Federal Police (AFP) last week announced charges against four suspects for alleged participation in a money-laundering scheme that involved a security company’s armored cash transport unit.…

[syndicated profile] register_security_feed

Posted by Brandon Vigliarolo

PLUS: Discord invite links may not be safe; Miscreants find new way to hide malicious JavaScript; and more!

Infosec In Brief  A pair of Congressional Democrats have demanded a review of the Common Vulnerabilities and Exposures (CVE) program amid uncertainties about continued US government funding for the scheme.…

[syndicated profile] dxdt_feed

Posted by Александр Венедюхин

Сколько лет Интернету – вопрос многогранный. Дмитрий Бурков пишет, что отсчитывать годы нужно с момента разработки BGP (и это довольно логично, так как BGP в современной Сети используется для обмена информацией о маршрутах доставки пакетов).

[syndicated profile] register_security_feed

Posted by Jessica Lyons

With Tehran’s military weakened, digital retaliation likely, experts tell The Reg

The current Israel–Iran military conflict is taking place in the era of hybrid war, where cyberattacks amplify and assist missiles and troops, and is being waged between two countries with very capable destructive cyber weapons.…

Friday Squid Blogging: Stubby Squid

Jun. 13th, 2025 09:02 pm
[syndicated profile] bruce_schneier_feed

Posted by Bruce Schneier

Video of the stubby squid (Rossia pacifica) from offshore Vancouver Island.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

[syndicated profile] dxdt_feed

Posted by Александр Венедюхин

На фото ниже, из коллекции Библиотеки Конгресса, – установка краеугольного камня первого здания библиотеки Конгресса, а конкретно – здания Томаса Джефферсона.

Image of building being built

Фото постановочное. Капитолий присутствует в качестве фона. Все работники в шляпах. Оно и понятно: это 1890 год, всё строго, а за нахождение на улице, – тем более – на стройплощадке, – без шляпы – могли если и не оштрафовать, то подвергнуть словесному порицанию. Ну, наверное. Не то что сейчас – если без каски.

Увеличенный фрагмент, на котором присутствует и метла, и мастерок:

Image of building being built

[syndicated profile] register_security_feed

Posted by Jessica Lyons

Some trace back to an outfit under US export controls for alleged PLA links

Both Apple's and Google's online stores offer free virtual private network (VPN) apps owned by Chinese companies, according to researchers at the Tech Transparency Project, and they don't make this fact readily known to people downloading the apps.…

GoDaddy loses .co to Team Internet

Jun. 13th, 2025 04:18 pm
[syndicated profile] domainincite_feed

Posted by Kevin Murphy

Team Internet is to take over back-end duties for .co, after agreeing to take less than half as much as GoDaddy was charging. The London-based company has teamed up on a joint venture, Equipo PuntoCo, with Panama-based registrar CCI REG to sign a 10-year deal with Colombia’s communications ministry, MINTIC. The handover will put an […]

The post GoDaddy loses .co to Team Internet first appeared on Domain Incite.

[syndicated profile] googlesecurity_feed

Posted by Kimberly Samra


With the rapid adoption of generative AI, a new wave of threats is emerging across the industry with the aim of manipulating the AI systems themselves. One such emerging attack vector is indirect prompt injections. Unlike direct prompt injections, where an attacker directly inputs malicious commands into a prompt, indirect prompt injections involve hidden malicious instructions within external data sources. These may include emails, documents, or calendar invites that instruct AI to exfiltrate user data or execute other rogue actions. As more governments, businesses, and individuals adopt generative AI to get more done, this subtle yet potentially potent attack becomes increasingly pertinent across the industry, demanding immediate attention and robust security measures.


At Google, our teams have a longstanding precedent of investing in a defense-in-depth strategy, including robust evaluation, threat analysis, AI security best practices, AI red-teaming, adversarial training, and model hardening for generative AI tools. This approach enables safer adoption of Gemini in Google Workspace and the Gemini app (we refer to both in this blog as “Gemini” for simplicity). Below we describe our prompt injection mitigation product strategy based on extensive research, development, and deployment of improved security mitigations.


A layered security approach

Google has taken a layered security approach introducing security measures designed for each stage of the prompt lifecycle. From Gemini 2.5 model hardening, to purpose-built machine learning (ML) models detecting malicious instructions, to system-level safeguards, we are meaningfully elevating the difficulty, expense, and complexity faced by an attacker. This approach compels adversaries to resort to methods that are either more easily identified or demand greater resources. 


Our model training with adversarial data significantly enhanced our defenses against indirect prompt injection attacks in Gemini 2.5 models (technical details). This inherent model resilience is augmented with additional defenses that we built directly into Gemini, including: 


  1. Prompt injection content classifiers

  2. Security thought reinforcement

  3. Markdown sanitization and suspicious URL redaction

  4. User confirmation framework

  5. End-user security mitigation notifications


This layered approach to our security strategy strengthens the overall security framework for Gemini – throughout the prompt lifecycle and across diverse attack techniques.


1. Prompt injection content classifiers


Through collaboration with leading AI security researchers via Google's AI Vulnerability Reward Program (VRP), we've curated one of the world’s most advanced catalogs of generative AI vulnerabilities and adversarial data. Utilizing this resource, we built and are in the process of rolling out proprietary machine learning models that can detect malicious prompts and instructions within various formats, such as emails and files, drawing from real-world examples. Consequently, when users query Workspace data with Gemini, the content classifiers filter out harmful data containing malicious instructions, helping to ensure a secure end-to-end user experience by retaining only safe content. For example, if a user receives an email in Gmail that includes malicious instructions, our content classifiers help to detect and disregard malicious instructions, then generate a safe response for the user. This is in addition to built-in defenses in Gmail that automatically block more than 99.9% of spam, phishing attempts, and malware.


A diagram of Gemini’s actions based on the detection of the malicious instructions by content classifiers.


2. Security thought reinforcement


This technique adds targeted security instructions surrounding the prompt content to remind the large language model (LLM) to perform the user-directed task and ignore any adversarial instructions that could be present in the content. With this approach, we steer the LLM to stay focused on the task and ignore harmful or malicious requests added by a threat actor to execute indirect prompt injection attacks.

A diagram of Gemini’s actions based on additional protection provided by the security thought reinforcement technique. 


3. Markdown sanitization and suspicious URL redaction 


Our markdown sanitizer identifies external image URLs and will not render them, making the “EchoLeak” 0-click image rendering exfiltration vulnerability not applicable to Gemini. From there, a key protection against prompt injection and data exfiltration attacks occurs at the URL level. With external data containing dynamic URLs, users may encounter unknown risks as these URLs may be designed for indirect prompt injections and data exfiltration attacks. Malicious instructions executed on a user's behalf may also generate harmful URLs. With Gemini, our defense system includes suspicious URL detection based on Google Safe Browsing to differentiate between safe and unsafe links, providing a secure experience by helping to prevent URL-based attacks. For example, if a document contains malicious URLs and a user is summarizing the content with Gemini, the suspicious URLs will be redacted in Gemini’s response. 


Gemini in Gmail provides a summary of an email thread. In the summary, there is an unsafe URL. That URL is redacted in the response and is replaced with the text “suspicious link removed”. 


4. User confirmation framework


Gemini also features a contextual user confirmation system. This framework enables Gemini to require user confirmation for certain actions, also known as “Human-In-The-Loop” (HITL), using these responses to bolster security and streamline the user experience. For example, potentially risky operations like deleting a calendar event may trigger an explicit user confirmation request, thereby helping to prevent undetected or immediate execution of the operation.


The Gemini app with instructions to delete all events on Saturday. Gemini responds with the events found on Google Calendar and asks the user to confirm this action.


5. End-user security mitigation notifications


A key aspect to keeping our users safe is sharing details on attacks that we’ve stopped so users can watch out for similar attacks in the future. To that end, when security issues are mitigated with our built-in defenses, end users are provided with contextual information allowing them to learn more via dedicated help center articles. For example, if Gemini summarizes a file containing malicious instructions and one of Google’s prompt injection defenses mitigates the situation, a security notification with a “Learn more” link will be displayed for the user. Users are encouraged to become more familiar with our prompt injection defenses by reading the Help Center article


Gemini in Docs with instructions to provide a summary of a file. Suspicious content was detected and a response was not provided. There is a yellow security notification banner for the user and a statement that Gemini’s response has been removed, with a “Learn more” link to a relevant Help Center article.

Moving forward


Our comprehensive prompt injection security strategy strengthens the overall security framework for Gemini. Beyond the techniques described above, it also involves rigorous testing through manual and automated red teams, generative AI security BugSWAT events, strong security standards like our Secure AI Framework (SAIF), and partnerships with both external researchers via the Google AI Vulnerability Reward Program (VRP) and industry peers via the Coalition for Secure AI (CoSAI). Our commitment to trust includes collaboration with the security community to responsibly disclose AI security vulnerabilities, share our latest threat intelligence on ways we see bad actors trying to leverage AI, and offering insights into our work to build stronger prompt injection defenses. 


Working closely with industry partners is crucial to building stronger protections for all of our users. To that end, we’re fortunate to have strong collaborative partnerships with numerous researchers, such as Ben Nassi (Confidentiality), Stav Cohen (Technion), and Or Yair (SafeBreach), as well as other AI Security researchers participating in our BugSWAT events and AI VRP program. We appreciate the work of these researchers and others in the community to help us red team and refine our defenses.


We continue working to make upcoming Gemini models inherently more resilient and add additional prompt injection defenses directly into Gemini later this year. To learn more about Google’s progress and research on generative AI threat actors, attack techniques, and vulnerabilities, take a look at the following resources:


Profile

beldmit: (Default)
Dmitry Belyavskiy

May 2025

S M T W T F S
    123
45678910
11121314151617
181920212223 24
25262728293031

Most Popular Tags

Page Summary

Style Credit

Expand Cut Tags

No cut tags
Page generated Jun. 17th, 2025 03:07 pm
Powered by Dreamwidth Studios