




March 2024 | Expert Insights: Perspective on a Timely Policy Issue
Bilva Chandra

Analyzing Harms from AI-Generated Images and Safeguarding Online Authenticity

The democratization of image-generating artificial intelligence (AI) tools without regulatory guardrails has amplified preexisting harms on the internet. The emergence of AI images on the internet began with generative adversarial networks (GANs), which are neural networks containing (1) a generator algorithm that creates an image and (2) a discriminator algorithm to assess the image's quality and/or accuracy. Through several collaborative rounds between the generator and discriminator, a final AI image is generated (Alqahtani, Kavakli-Thorne, and Kumar, 2021). This Person Does Not Exist, a site created by an Uber engineer that generates GAN images of realistic people, launched in February 2019 to awestruck audiences (Paez, 2019), with serious implications for exploitation in such areas of abuse as widespread scams and social engineering. This was just the beginning for AI-generated images and their exploitation on the internet.

Over time, AI image generation advanced away from GANs and toward diffusion models, which produce higher-quality images and more image variety than GANs. Diffusion models work by adding Gaussian noise to original training data images through a forward diffusion process and then, through a reverse process, slowly removing the noise and resynthesizing the image to reveal a new, clean generated image (Ho, Jain, and Abbeel, 2020). Diffusion models are paired with neural network techniques to map text-to-image capabilities, known as text-image encoders (Contrastive Language-Image Pre-training [CLIP] was a milestone in this space), to allow models to process visual concepts (Kim, Kwon, and Chul Ye, 2022). Thus, the commercialization of diffusion models (DALL-E, Stable Diffusion, Midjourney, Imagen, and others) put the power of synthetic image generation in the hands of the user on a global scale. The rise of image generation tools has introduced synthetic forms of such safety harms as mis- and disinformation, extremism, and nonconsensual intimate imagery (NCII), causing further disarray and damage in the internet ecosystem.

The societal harms from AI image generation tools have yet to be effectively addressed from a regulatory standpoint because of a nexus of policy challenges, such as copyright protection, data privacy, ethics, and contractual requirements. Recent fears rising from generative AI have somewhat moved the policy needle on AI regulation, sparking great interest in Congress and the executive branch (as evidenced by the Senate AI insight forums and President Joe Biden's executive order on AI, respectively [White House, 2023b]). However, without legislation addressing safety and societal issues related to generative AI, executing a coherent regulatory strategy to address the harmful effects of AI-generated images on the internet is a tall order. (As of this writing, Section 230 of the Communications Decency Act of 1996 serves as the sole piece of legislation for internet regulation [U.S. Code, Title 47, Section 230].)

In this paper, I delve into safety harms and challenges from AI-generated images and how such images affect authenticity on the internet. The first section outlines the role of image authenticity on the internet. In the second section, I review the technical safety challenges and harms for the image generation space, then look at industry solutions to authenticity, including the promise of provenance solutions and issues with implementing them. The third section outlines several policy considerations to tackle this new paradigm that largely focus on provenance, given its promise as an authenticity solution and relevance in policy conversations. In this paper, content authenticity in the context of images refers to establishing transparent information about images (both human and AI-generated), whether in origin, context, authorship, or other areas, in a way that is accessible to users on the internet. Throughout, this paper focuses on content authenticity, as it could play a key role in shaping public trust in image content broadly and covers a wide swath of issues, from disinformation to synthetic NCII.

Abbreviations
AI: artificial intelligence
C2PA: Coalition for Content Provenance and Authenticity
CAI: Content Authenticity Initiative
CID: civil investigative demand
CLIP: Contrastive Language-Image Pre-training
CSAM: child sexual abuse material
FTC: Federal Trade Commission
GAN: generative adversarial network
GIFCT: Global Internet Forum to Counter Terrorism
NCII: nonconsensual intimate imagery
NIST: National Institute of Standards and Technology
OECD: Organisation for Economic Co-operation and Development

The Role of Image Authenticity on the Internet

Images have played an important role in the history of the internet, informing people about current events, sparking emotional responses to war atrocities and injustice, galvanizing individuals to support a cause, and much more. Research shows that people tend to respond more viscerally to images than they do to text online (Medill School of Journalism, 2019). Studies suggest that the brain processes visual stimuli more rapidly than it does words (Alpuim and Ehrenberg, 2023). Furthermore, images can increase a viewer's perception of the truthfulness of an accompanying statement (Alpuim and Ehrenberg, 2023). However, the era of treating images as "proof" from a social standpoint is rapidly changing.

Image authenticity on the internet is in jeopardy, as AI-generated images without proof of provenance, or the origin of a given image, are affecting how people perceive current events and public figures. The issue is not only a decrease in content authenticity but also a lack of knowledge and tools among many users to help navigate this paradigm shift in the information domain. Opportunistic actors are taking advantage of accessible AI tools to reduce trust in content and media, particularly during tumultuous periods, such as violent conflicts. An example of this is the inflammatory AI-generated images spawned after Hamas's October 7, 2023, attack in Israel and the subsequent conflict in Gaza. Multiple AI-generated images (some photorealistic) spread widely on the internet, conjuring fake crowds of Israelis marching in support of the Israeli government or unusual images of Gazan children in the midst of explosions; these images were rarely labeled as AI-generated (Klee and Ramirez, 2023).

Solutions to this problem that focus on content authenticity could be the most valuable by providing individuals with more transparency about content that they consume and, therefore, more agency in terms of how to interpret that content. For example, solutions that provide metadata information to the
user, such as authorship, geolocation data, and tools used in the editing of an image, can all be useful for user interpretation. Though the research behind authenticity measures affecting user trust in content is not conclusive, and content authenticity is not a silver bullet for solving all AI-driven harms on the internet, content authenticity solutions are a step in the right direction.

Image-generation technology will continue to evolve, become more advanced, and likely become more photoreal, increasing the escalation potential for harm. There is no foolproof way to make these models entirely safe for use. To start solving issues of AI-generated images fueling disinformation, harmful propaganda, NCII, and other safety harms, the United States must first develop solutions in regard to authenticity and improve the public's access to information about these images. The first step in navigating this shift is to comprehend how AI image generation tools produce safety issues and challenges and how current safeguards are insufficient to tackle the problem.

New Safety Challenges in Image Generation

Safety challenges in the AI image generation landscape begin at the technical level, and the most significant safety challenges are due to biases and harms from training data, the existence of open-source image models, and a piecemeal approach to content moderation at the user level. The current AI image generation space mainly consists of text-to-image diffusion models, such as Midjourney, DALL-E, and Stable Diffusion, that generate images based on user prompts. Understanding the fundamentals of safety issues in text-to-image diffusion models can show why these models can produce unsafe images. Furthermore, diving deep into training data, open-source models, and content moderation reveals that these technical mitigations are simply not enough to prevent the generation of harmful content, and the United States needs authenticity solutions to manage risks and harms.

Image generation models reflect societal and representational biases on the internet because they are trained on data scraped from the internet. For example, there are far more images sexualizing women on the internet compared with similar images of men and far more images of men in professional roles (doctor, lawyer, engineer, etc.) than similar images of women. Image models conceptualize these representational biases and have become quite adept at generating content that both oversexualizes women and highlights men in professional positions (Heikkilä, 2022). Recent research shows that there is still much work to be done to make image models safer and less biased, as there are still severe occupational biases in models, which result in the exclusion of groups of people from generated results (Naik and Nushi, 2023). Furthermore, Rest of World in 2023 conducted an experiment with Midjourney that showed that the tool typically represented nationalities using harmful stereotypes: Images of a "Mexican person" mainly showed a man in a sombrero, and images of New Delhi almost exclusively showed polluted streets (Turk, 2023). The internet is inherently biased, given the nature of its human inputs: When you scrape highly biased data, you will generate it as well.

Safety harms in these models also stem from training data. To start, data labeling is largely outsourced to providers that specialize in scaled labeling, which is cost-effective. However, this process can introduce biases and inaccuracies in human labels for training data (Smith and Rustagi, 2020). When developers scrape data from the internet for image generation, their main method to ensure that models are safe is to filter training data and attempt to reduce the prevalence of harmful content in the training process (Smith and Rustagi, 2020). However, this method is contingent on how effective these filters are in rooting out harmful content while ensuring that they are not too conservative in excluding training data so that the models are still trained on a wide range of data and retain quality and creativity in their generations. Furthermore, even with robust safety filtering, the ability for models to deduce concepts across different types of benign images can result in the generation of a harmful image. For example, an image model that is trained on images of beaches but not on pornography will understand how the human body looks with swimsuits or minimal clothing. The same model could also be trained on images of children going to school, playing outside, etc., all of which are benign images. These model capabilities combined (without further safety measures) will likely allow the model to create images of scantily clad or even nude children through malign prompt engineering techniques. However, most developers would still want training data to have images of beaches and children to ensure high-quality and effective generations. Unfortunately, the trade-off between safety and what constitutes a quality product is often difficult.

The discussion of safety challenges with image generation would be incomplete without highlighting the open-source space for this critical technology. The debates around open-source generative AI are abundant: Those who are in favor highlight the benefits of open access in improving models and their safety capabilities with wider researcher access, and those against are focused on the potential for malign use and challenges with monitoring and controlling model use. With image generation in particular, open access has benefited malicious actors in removing safeguards and generating harmful content at scale with the use of fine-tuned open-source models. For example, when Stable Diffusion was open-sourced in 2022, Unstable Diffusion was born. Unstable Diffusion was a large server that used Stable Diffusion's open-source model with reduced safeguards to create not-suitable-for-workplace content (Wiggers and Silberling, 2022). An even more alarming example is CivitAI, a site created for AI image generation that allows users to browse thousands of models in order to generate pornographic content and synthetic child sexual abuse imagery, streamlining the nonconsensual AI porn economy (Maiberg, 2023a). A more recent example is a Stanford Internet Observatory investigation that found hundreds of images of confirmed child sexual abuse material (CSAM) in an open dataset called LAION-5B, which is commonly used in image generation models, such as Stable Diffusion (Thiel, 2023). The use of open-source image generation poses ethical concerns related to child exploitation, consent, and fair use. From a consent and fair use perspective, such use could disproportionately affect sex workers, who may have their images scraped from the internet during the model training process, allowing their likeness to be reproduced synthetically without their consent and without compensation. Despite the benefits of open-source software ensuring greater access to image models, it has accelerated safety harms, especially those related to sexual content and consent.

Last, it is valuable to note the content moderation safeguards of classifiers and keyword-blocking, which are used in many image generation tools, and to detail why they are insufficient for safety. Safety, as a first principle across the AI development lifecycle, is the path to ensuring safe image generation, starting from filtering data prior to training. When the majority of safety work is done in a monitoring capacity through content moderation, harmful generations are likely to fall through the cracks. The current image generation models on the market cannot entirely prevent the generation of NCII content, synthetic CSAM, extremist propaganda, and misinformation. Some of them, such as Stable Diffusion, rely on a weak form of content moderation: blocking specific keywords from being used. Stable Diffusion enacted keyword-blocking for words related to the human reproductive system to attempt to prevent the generation of pornography (Heikkilä, 2023). Bing's AI Image Generator, powered by DALL-E 3, blocked the keywords "twin towers" to prevent any harmful depictions of 9/11 (David, 2023). Despite keyword-driven safeguards, bot accounts were able to spread an AI-generated image of the Pentagon on fire, which briefly went viral (Bond, 2023). Keyword-driven moderation is at best a piecemeal solution that only helps to moderate low-hanging fruit and can hardly cover the multitude of harmful narratives at scale. Another form of moderation can be done through classifiers, such as prompt input classifiers that identify violative prompts and image output classifiers that classify images that should be blocked, both of which are used in DALL-E 3 to prevent harmful generations (David, 2023). Although such moderation is more holistic than just keyword-blocking, these classifiers are not foolproof. Specifically, red-teamers for DALL-E 3 found that (1) the model refusals on disinformation could be bypassed by asking for style changes, (2) the model could produce realistic images of fictitious events, and (3) public figures could be generated by the model through prompt engineering and circumvention (OpenAI, 2023). Broadly, content moderation, through both classifiers and keyword-blocking, was never built to infer context, and an attempt to do so is practically an impossible task. For example, users can attempt to bypass many keyword and classifier safeguards by using visual synonyms, such as "flesh-colored eggplant" (for a phallic image), "red liquid" for blood, and "skin-colored sheer top" to generate nudity; the list becomes endless.

These harms and safety challenges are shaping content authenticity on the internet. Though much of the spotlight of content authenticity is on mis- and disinformation, the existence of synthetic NCII, synthetic extremist propaganda, and more also directly shapes users' perceptions of reality and can cause great harm to individuals and societies. Unfortunately, from a technical perspective, eradicating all these safety harms seems unlikely, given the challenges of content moderation, bias and safety issues with training data, and the existence of tailor-made open-source models. Instead, the United States should look to create greater transparency around these images through solutions that promote access to information about the origin of content. These solutions may be less fruitful to mitigate synthetic NCII content, given that the core of mitigating NCII is not just deciphering whether the content is AI-generated but also ensuring legal recourse and accountability for individuals and entities that distribute and create such content. However, authenticity solutions such as provenance and watermarking could help dissuade and disincentivize malicious actors from using tools that adopt these safeguards and help debunk photorealistic NCII, disinformation, and much more. Framing this issue as one of content authenticity could empower users and put the onus of responsibility in terms of securing transparency and accountability for safety issues in image generation models on technology providers and government entities that can enforce regulations to combat these harms.

Industry Solutions to Preserve Image Authenticity

The issue of preserving authenticity on the internet is not new. Fake accounts, bots, phishing emails, and more have been persistent issues for years. Challenges with deepfakes started to spark more-deliberate conversations about photo and video authenticity; in 2017, a Reddit user exploited Google's open-source deep-learning library to post pornographic face-swap images (Adee, 2020). Now, the fight to preserve authenticity has become even more crucial, given a lack of policy safeguards and sufficient platform-level enforcement, as well as the speed at which AI image generation is improving. When examining the issue of harms caused by AI-generated images through the lens of "authenticity," it is important to think through what kinds of tools and technologies will best support individuals in determining whether an image is authentic, despite adversarial motives. A user-centric approach to this issue is key, given that the success of content authenticity initiatives can be shaped by user experiences, personal bias, and/or beliefs about technology.

Watermarking, Hashing, and Detection

There have been several debates about the right approach to authenticity. The concept of watermarking
has dominated the conversation and is a term that the White House included in its voluntary AI commitments for industry (White House, 2023a). However, there is confusion and a lack of consensus in the field about the definition of a watermark and how effective watermarks can be. A helpful way to define watermarking is provided by the Partnership on AI: a form of disclosure that can be visible or invisible to the user and includes "modifications into a piece of content that can help support interpretations of how the content was generated and/or edited" (Partnership on AI, 2023). Watermarking (both visible and invisible/metadata-based) can be a useful disclosure for the general public for content interpretation, but it is far from a holistic solution, given the need for watermarks to be robust against adversarial attacks, to overcome challenges to secure widespread adoption, and to be understandable by a consumer or user across different social platforms and hardware devices.

Another approach in the field is hashing, or fingerprinting image content, which happens after an image is created. Cryptographic hashing is used to determine exact matches, whereas perceptual hashing can find similar matches that may not be exactly the same image (Ofcom, 2022). The merits of hashing have been particularly evident in the identification of CSAM and terrorist content. For example, the National Center for Missing and Exploited Children and the Global Internet Forum to Counter Terrorism (GIFCT) are nonprofit organizations that use hash-sharing platforms with technology companies of thoroughly vetted hashes of confirmed CSAM (National Center for Missing and Exploited Children, undated) and terrorist content (GIFCT, undated), respectively. Hashing can prove useful in terms of sharing content among social media platforms that very clearly belongs in specific categories of abuse, but it is less practical for use across a variety of content, given that hashing occurs retroactively and cannot be done well at scale. It is also susceptible to adversarial attacks and is vulnerable to database integrity issues and discrepancies caused by human review in the content attribution process (Ofcom, 2022).

Last, an approach that has been discussed for several years is detection. Both established companies and smaller startups, such as Intel (Clayton, 2023), Optic (Kovtun, 2023), and Reality Defender (Wiggers, 2023), have produced deepfake detection solutions. Though the technology can be promising, it comes with a host of issues. Traditionally, in the cybersecurity space, detection and evasion are a cat-and-mouse game, with detection needing to constantly improve as both adversarial actors and the technology itself improve. Reality Defender's chief executive officer claims that provenance and watermarking solutions are weaker, given that they require buy-in, and that Reality Defender's product, which is focused on inference (determining the probability of something being fake), is a more robust solution (Goode, 2023). However, even with high rates of efficacy, the onus would still be on users to gauge how much they should trust a piece of content based on a probability metric alone. Furthermore, current image detection capabilities have accuracy issues, as reported in a Bellingcat investigation. Bellingcat assessed a tool by Optic (called "AI or Not") and determined that it was successful in identifying AI-generated and real images quite accurately, except when AI-generated images were both compressed and photorealistic, in which case accuracy dropped significantly (Kovtun, 2023). Image compression (a relatively common practice) can assist malign actors in evading detection, especially on social media platforms, which generally compress all uploaded images. Detection tools are not foolproof and are fragile to minor perturbations.

Provenance

Though no individual solution to content authenticity is holistic, provenance is emerging as a useful tool to proactively preserve origin metadata and/or any editing or changes to a given piece of content. Detection methods risk being less useful as technology continues to improve and evolve while placing the onus of using detection tools on a user every time they come across content that they deem to be suspicious. Furthermore, such methods can lack accuracy, further obfuscating the decisionmaking process for an individual to assess content for its authenticity. Provenance approaches, such as establishing the origin of a piece of content through secured metadata, are generally more robust, given that they focus on the origin of content rather than proving whether something is real or fake. Furthermore, when implemented well, provenance solutions can be incorporated across the content supply chain (in AI image generation tools, social media platforms, news sites, and more) so that metadata information is readily available to a user and can complement and be used in tandem with watermarking and fingerprinting initiatives.

Examining the original approaches to [...]eration (Zhang, Chapman, and LeFevre, 2009). In an AI image generation context, provenance could map the origin of an image through a cryptographic hash or signature that is applied and attached to the content, is stored securely through encryption, and is "tamper-evident" or is able to show whether the image has been altered in any way. Metadata information, available to users through labels, could also help define the trustworthiness of an image. In practice, making provenance a success goes far beyond a cryptographic signature and requires the widespread adoption of one or many interoperable frameworks across different mediums: hardware (e.g., cameras, smartphones), editing software (e.g., photo editing programs, face-swap apps), and publishing and sharing entities (e.g., news media, social media platforms), which is a very challenging task in practice.

The industry ecosystem has rallied around the application of provenance to support content authenticity ecosystems for AI-generated images. Industry leaders include Adobe, Intel, Microsoft, and Truepic, all of which are members of the Content Authenticity Initiative (CAI) and the Coalition for Content Provenance and Authenticity (C2PA). Both groups are focused on cross-industry participation to tackle the issues of media transparency and content provenance, with the C2PA framework underlying many of these initiatives, and have released products that use the C2PA framework. For example, Adobe launched its "Content Credentials" feature in 2023, which uses the C2PA standard to allow the attachment of secure, tamper-evident metadata on an export or download (Quach, 2023). The C2PA framework is an interoperable specification that "enables the authors of provenance data to securely bind statements of provenance data to instances of content using
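The forward diffusion process described in the introduction, adding Gaussian noise to a clean image and then learning to reverse it (Ho, Jain, and Abbeel, 2020), can be sketched numerically. This is a minimal illustrative Python sketch: the noise schedule, the single-value "image," and the function name are hypothetical choices for demonstration, not parameters of any production model.

```python
import math
import random

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t from the closed-form forward process
    q(x_t | x_0) = N(sqrt(abar_t) * x0, 1 - abar_t),
    where abar_t is the running product of (1 - beta_s) for s <= t."""
    abar = 1.0
    for beta in betas[: t + 1]:
        abar *= 1.0 - beta
    eps = rng.gauss(0.0, 1.0)  # the Gaussian noise a model later learns to remove
    return math.sqrt(abar) * x0 + math.sqrt(1.0 - abar) * eps

rng = random.Random(0)
betas = [0.02] * 100  # toy noise schedule (an assumption for illustration)
x0 = 0.8              # one "pixel" of a clean training image

early = forward_diffuse(x0, 5, betas, rng)   # still dominated by the original signal
late = forward_diffuse(x0, 99, betas, rng)   # mostly noise: the signal term has shrunk
```

A trained diffusion model runs this corruption in reverse, predicting and subtracting the noise step by step to resynthesize a clean image.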
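The weakness of keyword-blocking discussed in the safety-challenges section, that it matches surface strings rather than inferring context, takes only a few lines to demonstrate. The blocklist below is a tiny hypothetical stand-in, not any vendor's actual list.

```python
# Toy keyword-blocking moderator (hypothetical blocklist for illustration).
BLOCKLIST = {"blood", "nude", "pornography"}

def is_blocked(prompt: str) -> bool:
    # Reject the prompt if any blocklisted term appears as a word.
    return any(term in prompt.lower().split() for term in BLOCKLIST)

# A directly violative prompt is caught ...
assert is_blocked("a nude figure")
# ... but visual synonyms sail through, because string matching cannot
# infer that these prompts may describe the same image.
assert not is_blocked("a figure in a skin-colored sheer top")
assert not is_blocked("a pool of red liquid")
```

Classifier-based moderation closes some of this gap but, as the DALL-E 3 red-teaming results show, remains bypassable through prompt engineering.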
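The distinction drawn in the hashing discussion between cryptographic hashing (exact matches) and perceptual hashing (similar matches) can be made concrete. The average-hash below is a deliberately simplified toy over a flat list of pixel values; production fingerprinting systems are far more robust.

```python
import hashlib

def crypto_hash(pixels):
    """Cryptographic hash: changing any byte changes the whole digest."""
    return hashlib.sha256(bytes(pixels)).hexdigest()

def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set when the pixel is
    brighter than the image mean. Similar images yield similar bits."""
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return sum(x != y for x, y in zip(a, b))

original = [10, 200, 30, 220, 15, 210, 25, 230]      # tiny grayscale "image"
recompressed = [12, 198, 29, 221, 16, 208, 26, 229]  # slight pixel shifts

# Exact-match hashing breaks under even mild recompression ...
assert crypto_hash(original) != crypto_hash(recompressed)
# ... while the perceptual fingerprint still matches.
assert hamming(average_hash(original), average_hash(recompressed)) == 0
```

This is why hash-sharing databases for abuse content rely on perceptual rather than purely cryptographic matching, and also why adversarial perturbations against those fingerprints remain a concern.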
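The "tamper-evident" provenance binding described in the Provenance section can be sketched with a keyed digest. This is an illustrative stand-in only: the key, field names, and HMAC construction are assumptions for the sketch, whereas real frameworks such as C2PA bind provenance claims with certificate-based public-key signatures.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"device-secret"  # hypothetical key held by the generator or device

def bind_provenance(image_bytes, metadata):
    """Bind metadata to content by signing the content hash plus the claims."""
    claim = json.dumps(
        {"sha256": hashlib.sha256(image_bytes).hexdigest(), **metadata},
        sort_keys=True,
    )
    sig = hmac.new(SIGNING_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": sig}

def verify(image_bytes, record):
    """Tamper-evident check: fails if the image bytes or the claims changed."""
    expected = hmac.new(SIGNING_KEY, record["claim"].encode(),
                        hashlib.sha256).hexdigest()
    claimed_hash = json.loads(record["claim"])["sha256"]
    return (hmac.compare_digest(expected, record["signature"])
            and claimed_hash == hashlib.sha256(image_bytes).hexdigest())

image = b"\x89PNG...pixel data"
record = bind_provenance(image, {"tool": "example-generator", "ai_generated": True})
assert verify(image, record)             # untouched content verifies
assert not verify(image + b"!", record)  # any alteration is evident
```

In a deployed system, the record would travel with the image as metadata, and any platform in the content supply chain could re-run the verification.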