Interesting for publication
_________
Ksenija Kostić
Marketing
<http://www.pcpress.rs/> www.pcpress.rs
PC Press | Osmana Đikića 4 | 11000 Beograd | Srbija
Tel: +381 11 2080-220 | Mob: +381 63 125 00 26
From: Nada Puresevic [mailto:Nada.Puresevic@grayling.com]
Sent: 09 December 2020 10:26
Subject: Serbia: The most searched-for terms on Google in 2020
Dear Sir/Madam,
Google today published its annual list of the most searched-for terms in 2020, offering a unique insight into the most significant moments and trends in Serbia.
A more detailed press release with the lists can be found in the attachment.
Kind regards,
Nada
Nada Purešević
Account Manager
-
Takovska 6
11000 Belgrade, Serbia
T +381 (0)11 3234 198 | M +381 (0) 64 6428038
nada.puresevic@grayling.com <mailto:nada.puresevic@grayling.com>
<https://twitter.com/graylingpr> @GraylingPR
<http://www.grayling.com/> grayling.com
Grayling is part of Huntsworth. Registered number: 3140273. Registered
office: 8th Floor, Holborn Gate, 26 Southampton Buildings, London, England,
WC2A 1AN, UK
If we want to publish this… I think we should :)
_________
Ksenija Kostić
Marketing
<http://www.pcpress.rs/> www.pcpress.rs
PC Press | Osmana Đikića 4 | 11000 Beograd | Srbija
Tel: +381 11 2080-220 | Mob: +381 63 125 00 26
From: Anja Mihaljevic [mailto:anja.mihaljevic@404.agency]
Sent: 08 December 2020 13:37
To: Anja Mihaljevic <anja.mihaljevic@404.agency>
Subject: Press release_Huawei signs contract with the Office for IT and eGovernment
Dear colleagues,
Huawei and the Office for IT and eGovernment have signed an equipment hosting agreement under which the telecommunications company becomes another commercial user of the State Data Center in Kragujevac.
The agreement was signed by Mr Chen Chen, CEO of Huawei Serbia, and Prof. Dr Mihailo Jovanović, director of the Office for IT and eGovernment.
More information, along with photographs, can be found in the attached press release.
Photo credit: Huawei archive.
Best regards,
Anja Mihaljević
+385 99 2617179
> Shall we write about this? Are there any more details? What happened to them?
Here's what happened. It might be worth writing something; big problems start from small things :) I'm forwarding this to the web team as well.
https://www.zdnet.com/article/amazon-heres-what-caused-major-aws-outage-last-week-apologies/
Amazon: Here's what caused the major AWS outage last week
AWS explains how adding a small amount of capacity to Kinesis servers
knocked out dozens of services for hours.
Liam Tung | November 30, 2020 -- 11:23 GMT (03:23 PST)
Amazon Web Services (AWS) has explained the cause of
<https://www.zdnet.com/article/aws-outage-impacts-thousands-of-online-services/> last Wednesday's widespread outage, which impacted thousands of third-party online services for several hours.
While dozens of AWS services were affected, AWS says the outage occurred in
its Northern Virginia, US-East-1, region. It happened after a "small
addition of capacity" to its front-end fleet of Kinesis servers.
Kinesis is used by developers, as well as other AWS services like CloudWatch
and Cognito authentication, to capture data and video streams and run them
through AWS machine-learning platforms.
The Kinesis service's front end handles authentication and throttling, and distributes workloads to its back-end "workhorse" cluster via a database mechanism called sharding.
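To make "sharding" a bit more concrete, here is a minimal, illustrative Python sketch, not AWS's actual code: the shard count, key names, and MD5-based key space are assumptions in the spirit of the Kinesis model. A front end can route a record to a back-end shard by hashing its partition key into a fixed key space that is split into one contiguous range per shard:

import hashlib

# Illustrative only: hash a record's partition key into a 128-bit space
# that is divided into contiguous ranges, one range per back-end shard.
NUM_SHARDS = 8
KEY_SPACE = 2 ** 128  # MD5 yields a 128-bit value

def shard_for(partition_key: str, num_shards: int = NUM_SHARDS) -> int:
    """Return the index of the shard whose hash-key range contains this key."""
    h = int(hashlib.md5(partition_key.encode()).hexdigest(), 16)
    range_size = KEY_SPACE // num_shards
    return min(h // range_size, num_shards - 1)

for key in ("sensor-42", "sensor-43", "user-7"):
    print(key, "-> shard", shard_for(key))

The point of the mechanism is only that routing is deterministic per key; the real service's shard management is, of course, far richer than this.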
<https://aws.amazon.com/message/11201/> As AWS notes in a lengthy summary
of the outage, the addition of capacity triggered the outage but wasn't the
root cause of it. AWS was adding capacity for an hour after 2:44am PST, and
after that all the servers in the Kinesis front-end fleet began to exceed the
maximum number of threads allowed by its current operating system
configuration.
The first alarm was triggered at 5:15am PST and AWS engineers spent the next
five hours trying to resolve the issue. Kinesis was fully restored at
10:23pm PST.
Amazon explains how the front-end servers distribute data across its Kinesis
back-end: "Each server in the front-end fleet maintains a cache of
information, including membership details and shard ownership for the
back-end clusters, called a shard-map."
According to AWS, that information is obtained through calls to a
microservice vending the membership information, retrieval of configuration
information from DynamoDB, and continuous processing of messages from other
Kinesis front-end servers.
"For [Kinesis] communication, each front-end server creates operating system
threads for each of the other servers in the front-end fleet. Upon any
addition of capacity, the servers that are already operating members of the
fleet will learn of new servers joining and establish the appropriate
threads. It takes up to an hour for any existing front-end fleet member to
learn of new participants."
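The quoted thread-per-peer design is what makes a capacity addition risky: every existing server's thread count grows with the size of the fleet, so a fixed per-process limit is eventually crossed. A rough back-of-the-envelope Python sketch with invented numbers (the article gives neither the real limit nor the real fleet size) illustrates the failure mode:

# Invented numbers, for illustration only.
OS_THREAD_LIMIT = 4096   # assumed per-process cap from the OS configuration
BASELINE_THREADS = 512   # assumed threads for request handling, caching, etc.

def threads_per_server(fleet_size: int) -> int:
    # one peer thread for every *other* server in the front-end fleet
    return BASELINE_THREADS + (fleet_size - 1)

for fleet_size in (3000, 3400, 3585, 3600):
    used = threads_per_server(fleet_size)
    status = "OK" if used <= OS_THREAD_LIMIT else "EXCEEDS LIMIT"
    print(f"fleet={fleet_size:5d}  threads/server={used:5d}  {status}")

This is also why the mitigations described below help: fewer, larger servers shrink the per-server peer count, and a higher OS thread limit moves the crossing point further out.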
As the number of threads exceeded the OS configuration, the front-end
servers ended up with "useless shard-maps" and were unable to route requests
to Kinesis back-end clusters. AWS had already rolled back the additional
capacity that triggered the event but had reservations about boosting the
thread limit in case it delayed the recovery.
As a first step, AWS has moved to servers with more CPU and memory, which reduces the total number of servers in the fleet and therefore the number of threads each server needs in order to communicate across it.
It's also testing an increase in thread count limits in its operating system
configuration and working to "radically improve the cold-start time for the
front-end fleet".
CloudWatch and other large AWS services will move to a separate, partitioned front-end fleet. AWS is also working on a broader project to keep failures in one service from affecting others.
AWS has also acknowledged the delays in updating its Service Health
Dashboard during the incident, but says that was because the tool its
support engineers use to update the public dashboard was affected by the
outage. During that time it was updating customers via the Personal Health
Dashboard.
"With an event such as this one, we typically post to the Service Health
Dashboard. During the early part of this event, we were unable to update the
Service Health Dashboard because the tool we use to post these updates
itself uses Cognito, which was impacted by this event," AWS said.
"We want to apologize for the impact this event caused for our customers."
From: Vesna Čarknajev <vesna@pcpress.rs>
Sent: Thursday, November 26, 2020 11:12 AM
To: 'Vazne i pri tom zabavne stvari' <fun@pcpress.info>
Subject: Re: [Fun] What a catastrophe is
Shall we write about this?
Are there any more details? What happened to them?
Vesna Čarknajev
CEO
PC Press | Osmana Đikića 4 | 11000 Beograd | Srbija
Tel: +381 11 2765-533 | Mob: +381 63 234-801
E-mail: <mailto:vesna@pcpress.rs> vesna@pcpress.rs
From: Fun [mailto:fun-bounces@pcpress.info] On Behalf Of Dejan Ristanovic
Sent: Thursday, November 26, 2020 1:18 AM
To: Vazne i pri tom zabavne stvari <fun@pcpress.info>
Subject: [Fun] What a catastrophe is
Forget Corona and all that nonsense, THIS is a catastrophe ;>
https://www.theverge.com/2020/11/25/21719396/amazon-web-services-aws-outage-down-internet