What’s new

21.03.20: We published evaluation results for FBA Matting [22].
30.01.20: We published evaluation results for Matting with Background Estimation [21].
08.12.17: We published evaluation results for Information-Flow Matting [20].
07.04.17: Subjective study results are now available.
26.12.16: We published evaluation results for Self-Adaptive Matting [17].
16.11.16: We published evaluation results for Deep Matting [16].
06.04.16: We published evaluation results for Sparse Sampling Matting [14].
15.12.15: 1) New sequences with natural hair, including 3 public sequences.
2) New temporal-coherency metrics chosen through careful analysis (see [3]).
3) New trimap-generation method (more natural-looking and accurate).
4) Better ground-truth quality owing to correction of lighting changes during capture.
5) Improved website loading speed and interface.
07.09.15: We published the paper describing our benchmark [3].
30.12.14: We published results for multiple trimap levels; use the drop-down menu at the top left corner to switch levels.
29.12.14: We added a general ranking to the rating table.
10.11.14: “Sparse codes as Alpha Matte” was added.
26.09.14: Source sequences are now available for online viewing. Full-screen mode was added.
30.08.14: Composite sequences are now available.
27.08.14: The “Refine Edge tool in Adobe After Effects” was added.
25.08.14: The official opening.

Overview

Introduction

The VideoMatting project is the first public objective benchmark for video-matting methods. It contains scatter plots and rating tables for different quality metrics. In addition, the results of participating methods are available for viewing in a player equipped with a movable zoom region. We believe our work will help rank existing methods and aid developers of new methods in improving their results.

Datasets

The data set consists of five moving objects captured in front of a green screen and seven objects captured using the stop-motion procedure described below. We composited the objects over a set of background videos with various levels of 3D camera motion, color balance, and noise. We published ground-truth data for two stop-motion sequences and hid the rest to ensure fairness of the comparison.

Using thresholding and morphological operations on the ground-truth alpha mattes, we generated narrow trimaps. We then dilated the results using graph-cut-based energy minimization, which yields trimaps that look more handmade than those produced by common morphological dilation.
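As an illustration, here is a minimal sketch of the morphological baseline stage for narrow-trimap generation, assuming 8-bit alpha mattes and OpenCV; the thresholds, radius, and function name are illustrative assumptions, and our pipeline replaces the plain morphological growth of the unknown band with the graph-cut-based dilation described above.

```python
import cv2
import numpy as np

def narrow_trimap(alpha: np.ndarray, fg_thresh: int = 250,
                  bg_thresh: int = 5, radius: int = 5) -> np.ndarray:
    """Build a narrow trimap from an 8-bit ground-truth alpha matte.

    Confident foreground stays 255, confident background stays 0, and a
    narrow band around the transition is marked unknown (128).
    """
    size = 2 * radius + 1
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (size, size))
    fg = (alpha >= fg_thresh).astype(np.uint8)  # confident foreground mask
    bg = (alpha <= bg_thresh).astype(np.uint8)  # confident background mask
    # Eroding both masks widens the unknown band around the transition.
    fg = cv2.erode(fg, kernel)
    bg = cv2.erode(bg, kernel)
    trimap = np.full(alpha.shape, 128, dtype=np.uint8)  # unknown by default
    trimap[fg == 1] = 255
    trimap[bg == 1] = 0
    return trimap
```

Eroding both confident regions grows the unknown band symmetrically around the matte's transition zone, which is what the subsequent graph-cut dilation refines.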

Chroma Keying

Green screen vs. stop motion: alpha mattes from chroma keying and from stop-motion capture for the same image region. The stop-motion result is significantly better at preserving details.

Chroma keying is a common practice in the cinema industry: the cinematographer captures an actor in front of a green or blue screen, and a VFX expert then replaces the background using special software. Our evaluation uses five green-screen video sequences with a significant amount of semitransparency (e.g., hair or motion blur), provided to us by Hollywood Camera Work. We extracted alpha mattes and the corresponding foregrounds using The Foundry Keylight. Chroma keying enables us to obtain alpha mattes of natural-looking objects with arbitrary motion. Nevertheless, this technique can't guarantee that the alpha maps themselves are natural, because it assumes the screen color is absent from the foreground object. To get alpha maps with a fully natural appearance, we use the stop-motion method.
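For clarity, keying inverts the standard compositing (matting) equation; per pixel $x$,

\[
I(x) \;=\; \alpha(x)\,F(x) + \bigl(1-\alpha(x)\bigr)\,B(x),
\]

where $I$ is the observed color, $F$ the foreground color, $B$ the (screen) background color, and $\alpha$ the transparency. A keyer assumes $F$ contains no screen color; wherever the foreground reflects or contains green, the solver attributes that color to the $(1-\alpha)B$ term and reports false transparency.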

Stop Motion

One-step capture over different backgrounds. We use checkerboard backgrounds instead of solid ones to eliminate screen reflection.

We designed the following procedure to perform stop-motion capture: a fuzzy toy is placed on a platform in front of an LCD monitor. The toy rotates in small, discrete steps along a predefined 3D trajectory, controlled by two servos connected to a computer. After each step, the digital camera in front of the setup captures the motionless toy against a set of background images. At the end of this process, the toy is removed and the camera again captures all of the background images.

We paid special attention to avoiding reflections of the background screen in the foreground object. These reflections can lead to false transparency that is especially noticeable in nontransparent regions. To reduce the amount of reflection, we used checkerboard background images instead of solid colors, adjusting the mean color of the screen to be the same for each background.

Finally, we corrected global lighting changes caused by light-bulb flickering. The resulting alpha mattes have a noise level below 1%. A detailed description of the ground-truth extraction methods is given in [3].
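The core idea behind the extraction is that each pixel of the static toy is observed over several known backgrounds, turning matte recovery into an overdetermined linear problem (triangulation-style matting). Below is a minimal least-squares sketch under that formulation; the function name, array shapes, and the ridge/eps constants are our assumptions, and the actual procedure in [3] is more elaborate.

```python
import numpy as np

def triangulation_matte(images: np.ndarray, backgrounds: np.ndarray,
                        ridge: float = 1e-8, eps: float = 1e-4):
    """Recover alpha and foreground for a static object shot over K known
    backgrounds. images, backgrounds: shape (K, H, W, 3), floats in [0, 1].

    Per pixel, the compositing equation I_k = alpha*F + (1 - alpha)*B_k
    rearranges to I_k - B_k = u - alpha*B_k with u = alpha*F, which is
    linear in the unknowns x = (u_r, u_g, u_b, alpha).
    """
    K, H, W, _ = images.shape
    A = np.zeros((H, W, 3 * K, 4))
    for k in range(K):
        for c in range(3):
            A[:, :, 3 * k + c, c] = 1.0                       # coefficient of u_c
            A[:, :, 3 * k + c, 3] = -backgrounds[k, :, :, c]  # coefficient of alpha
    b = (images - backgrounds).transpose(1, 2, 0, 3).reshape(H, W, 3 * K)
    # Normal equations; a tiny ridge keeps pixels solvable where backgrounds coincide.
    AtA = np.einsum('hwec,hwed->hwcd', A, A) + ridge * np.eye(4)
    Atb = np.einsum('hwec,hwe->hwc', A, b)
    x = np.linalg.solve(AtA, Atb[..., None])[..., 0]  # batched 4x4 solves
    alpha = np.clip(x[..., 3], 0.0, 1.0)
    fg = np.clip(x[..., :3] / np.maximum(alpha, eps)[..., None], 0.0, 1.0)
    return alpha, fg
```

With K distinct backgrounds there are 3K equations for 4 unknowns per pixel, so K >= 2 already overdetermines the system; this redundancy is what makes the stop-motion mattes far cleaner than keyed ones.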

Evaluation Methodology

Our comparison includes both image- and video-matting methods. We apply each matting method to the videos in our data set and then compare the results using the following metrics of per-pixel accuracy and temporal coherency (see our paper [3] for a comparison of different metrics):
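The metric formulas were rendered as images on the original page and are missing here. The following display is a hedged reconstruction of the per-pixel accuracy metric (MSE) and the two temporal-coherency metrics (dtSSD and the motion-aware MESSDdt), consistent with the notation defined below; consult [3] for the authoritative definitions.

\[
\mathrm{MSE} \;=\; \frac{1}{N}\sum_{t,\,x}\bigl(\alpha_t(x)-\alpha_t^*(x)\bigr)^2
\]
\[
\mathrm{dtSSD} \;=\; \sqrt{\frac{1}{N}\sum_{t,\,x}\Bigl(\bigl(\alpha_{t+1}(x)-\alpha_t(x)\bigr)-\bigl(\alpha_{t+1}^*(x)-\alpha_t^*(x)\bigr)\Bigr)^2}
\]
\[
\mathrm{MESSDdt} \;=\; \frac{1}{N}\sum_{t,\,x}\Bigl(\bigl(\alpha_t(x)-\alpha_t^*(x)\bigr)^2-\bigl(\alpha_{t+1}(x+v(x))-\alpha_{t+1}^*(x+v(x))\bigr)^2\Bigr)
\]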

Here $N$ denotes the total number of pixels; $\alpha_t(x)$ and $\alpha_t^*(x)$ denote the transparency values of the video matting under consideration and of the ground truth, respectively, at pixel $x$ of frame $t$; and $v(x)$ denotes the motion vector at pixel $x$. We use the optical-flow algorithm [11], computed on the ground-truth sequences. Note that the motion-aware metrics give no unfair advantage to matting methods that rely on a similar motion-estimation algorithm, since those methods have no access to the ground-truth sequences. A detailed description of the quality metrics is given in [3].

Public Sequences

For training purposes, we publish here three test sequences with their ground-truth transparency maps. Developers and researchers are welcome to use these sequences, but we ask that you cite our paper [3].

Participate

We invite developers of video-matting methods to use our benchmark. We will evaluate the submitted data and report the scores to the developer. In cases where the developer specifically grants permission, we will publish the results on our site. We can also publish anonymous scores for blind-reviewed papers. To participate, simply follow these steps:

  1. Download the data set containing our sequences: City, Flowers, Concert, Rain, Snow, Vitaliy, Artem, Slava, Juneau, Woods.
  2. Apply your method to each of our test cases.
  3. Upload the alpha and foreground sequences to any file-sharing service. We kindly ask you to maintain these naming and directory-structure conventions. If your method doesn't explicitly produce foreground images, you can skip uploading them; in this case, we will generate them using the method proposed in [7].
  4. Fill in this form to provide information about your method.

Contact us by email with any questions or suggestions at questions@videomatting.com.

Cite Us

To refer to our evaluation or test sequences in your work, cite our paper [3].

@inproceedings{Erofeev2015,
	title={Perceptually Motivated Benchmark for Video Matting},
	author={Mikhail Erofeev and Yury Gitman and Dmitriy Vatolin and Alexey Fedorov and Jue Wang},
	year={2015},
	month={September},
	pages={99.1--99.12},
	articleno={99},
	numpages={12},
	booktitle={Proceedings of the British Machine Vision Conference (BMVC)},
	publisher={BMVA Press},
	doi={10.5244/C.29.99},
	isbn={1-901725-53-7},
	url={https://dx.doi.org/10.5244/C.29.99}
}

Evaluation

Rating

[Interactive rating table. For the trimap size selected in the drop-down menu and for each quality metric, the table lists each method's publication year, overall rank, and per-sequence score and rank on the ten test sequences: City, Rain, Concert, Flowers, Snow, Slava, Vitaliy, Artem, Juneau, and Woods. The evaluated methods are Bayesian Matting [2], Robust Matting [12], Refine Edge [15], Closed Form [7], Learning Based [13], Nonlocal Matting [5], Shared Matting [4], Comprehensive Sampling [10], KNN Matting [1], Spectral Matting [6], Sparse Sampling [14], Deep Matting [16], Self-Adaptive [17], Information-Flow Matting [20], Matting with Background Estimation [21], and FBA Matting [22]. Across all metrics and trimap sizes, FBA Matting [22] achieves the best overall rank, with Matting with Background Estimation [21] close behind.]
Note: make sure you are using the latest version of your web browser (we recommend Chromium-based browsers).

Integral Plots

Subjective comparison

We carried out a subjective comparison of 13 matting methods using the Subjectify.us platform. We applied the matting methods to videos from our data set and then uploaded the videos containing the extracted foreground objects, along with the ground-truth sequences, to Subjectify.us. The platform hired study participants and showed them these videos in a pairwise fashion. For each pair, participants were asked to choose the video with better visual quality or to indicate that the two are of approximately equal quality. Each participant compared 30 pairs, including 4 hidden quality-control comparisons between the ground truth and a low-quality method; the answers of 23 participants were rejected because they failed at least one quality-control question. In total, 10,556 answers from 406 participants were collected. The Bradley-Terry [18] and Crowd Bradley-Terry [19] models were used to convert the pairwise comparisons into subjective ranks. The study report generated by the platform is shown below.
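To make the ranking step concrete, here is a minimal sketch of a classic Bradley-Terry [18] maximum-likelihood fit via the MM (Zermelo) iteration. It illustrates the model rather than the platform's exact implementation (the crowd variant [19] additionally models worker reliability); the function name and the tie-splitting convention are our assumptions.

```python
import numpy as np

def bradley_terry(wins: np.ndarray, iters: int = 500, tol: float = 1e-9) -> np.ndarray:
    """Maximum-likelihood Bradley-Terry scores from pairwise win counts.

    wins[i, j] = how often item i was preferred over item j; a tie can be
    recorded as half a win for each side. Assumes every item wins at least
    once (otherwise its ML score is zero). Returns scores summing to 1.
    """
    m = wins.shape[0]
    n = wins + wins.T              # total comparisons for each pair
    w = wins.sum(axis=1)           # total wins for each item
    p = np.full(m, 1.0 / m)        # initial scores
    for _ in range(iters):
        # MM (Zermelo) update: p_i <- w_i / sum_j n_ij / (p_i + p_j)
        denom = n / (p[:, None] + p[None, :])
        np.fill_diagonal(denom, 0.0)
        p_new = w / denom.sum(axis=1)
        p_new /= p_new.sum()
        if np.abs(p_new - p).max() < tol:
            return p_new
        p = p_new
    return p

# Toy usage: three methods, method 0 clearly preferred overall.
wins = np.array([[0, 8, 9],
                 [2, 0, 6],
                 [1, 4, 0]], dtype=float)
print(bradley_terry(wins))  # highest score for method 0
```

Assembling the participants' answers into a 13x13 win-count matrix and sorting the fitted scores yields the subjective rank order of the methods.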

Multidimensional Analysis

References

[1] Qifeng Chen, Dingzeyu Li, and Chi-Keung Tang. KNN matting. Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 35(9):2175–2188, 2013. [doi, project page]
[2] Yung-Yu Chuang, Brian Curless, David H. Salesin, and Richard Szeliski. A Bayesian approach to digital matting. In Computer Vision and Pattern Recognition (CVPR), volume 2, pages II-264–II-271, 2001. [doi, project page, code]
[3] Mikhail Erofeev, Yury Gitman, Dmitriy Vatolin, Alexey Fedorov, and Jue Wang. Perceptually motivated benchmark for video matting. In British Machine Vision Conference (BMVC), pages 99.1–99.12, 2015. [doi, pdf, project page]
[4] Eduardo S. L. Gastal and Manuel M. Oliveira. Shared sampling for real-time alpha matting. Computer Graphics Forum, 29(2):575–584, 2010. [project page]
[5] Philip Lee and Ying Wu. Nonlocal matting. In Computer Vision and Pattern Recognition (CVPR), pages 2193–2200, 2011. [code]
[6] A. Levin, A. Rav-Acha, and D. Lischinski. Spectral matting. Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 30(10):1699–1712, 2008. [doi, project page]
[7] Anat Levin, Dani Lischinski, and Yair Weiss. A closed-form solution to natural image matting. Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 30(2):228–242, 2008. [doi, code]
[8] Christoph Rhemann, Carsten Rother, Jue Wang, Margrit Gelautz, Pushmeet Kohli, and Pamela Rott. A perceptually motivated online benchmark for image matting. In Computer Vision and Pattern Recognition (CVPR), pages 1826–1833, 2009. [doi]
[9] E. Shahrian and D. Rajan. Weighted color and texture sample selection for image matting. In Computer Vision and Pattern Recognition (CVPR), pages 718–725, 2012. [doi, code]
[10] E. Shahrian, D. Rajan, B. Price, and S. Cohen. Improving image matting using comprehensive sampling sets. In Computer Vision and Pattern Recognition (CVPR), pages 636–643, 2013. [doi, code]
[11] Karen Simonyan, Sergey Grishin, Dmitriy Vatolin, and Dmitriy Popov. Fast video super-resolution via classification. In International Conference on Image Processing (ICIP), pages 349–352, 2008. [doi]
[12] Jue Wang and Michael F. Cohen. Optimized color sampling for robust matting. In Computer Vision and Pattern Recognition (CVPR), pages 1–8, 2007. [doi, project page]
[13] Yuanjie Zheng and C. Kambhamettu. Learning based digital matting. In International Conference on Computer Vision (ICCV), pages 889–896, 2009. [doi, code]
[14] Levent Karacan, Aykut Erdem, and Erkut Erdem. Alpha matting with KL-divergence-based sparse sampling. IEEE Transactions on Image Processing, 2017.
[15] Refine Edge tool in Adobe After Effects CC. http://www.adobe.com/en/products/aftereffects.html
[16] Ning Xu, Brian Price, Scott Cohen, and Thomas Huang. Deep image matting. In Computer Vision and Pattern Recognition (CVPR), 2017.
[17] Guangying Cao, Jianwei Li, Xiaowu Chen, and Zhiqiang He. Patch-based self-adaptive matting for high-resolution image and video. The Visual Computer, pages 1–15, 2017.
[18] Ralph Allan Bradley and Milton E. Terry. Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika, 39(3/4):324–345, 1952.
[19] Xi Chen et al. Pairwise ranking aggregation in a crowdsourced setting. In Proceedings of the Sixth ACM International Conference on Web Search and Data Mining (WSDM), 2013.
[20] Yagiz Aksoy, Tunc Ozan Aydin, and Marc Pollefeys. Designing effective inter-pixel information flow for natural image matting. In Computer Vision and Pattern Recognition (CVPR), 2017. [doi, code]
[21] Matting with Background Estimation: A Novel Method for Extracting Clean Foreground. IEEE Transactions on Image Processing, 2020 (anonymous submission).
[22] F, B, Alpha Matting. Anonymous ECCV 2020 submission #6826.