What’s new

21.03.20We published evaluation results for FBA Matting [22].
30.01.20We published evaluation results for Matting with Background Estimation [21].
08.12.17We published evaluation results for Information-Flow Matting [20].
07.04.17Subjective study results are now available.
26.12.16We published evaluation results for Self-Adaptive Matting [17].
16.11.16We published evaluation results for Deep Matting [16].
6.04.16We published evaluation results for Sparse Sampling Matting [14].
15.12.151) New sequences with natural hairs including 3 public sequences.
2) New metrics of temporal coherency chosen by careful analysis (see [3]).
3) New trimap generation method (more natural-looking and accurate).
4) Better ground-truth quality owing to correction of lightning changes during capturing.
5) Improvement in website loading speed and interface.
7.09.15We published the paper with our benchmark description [3].
30.12.14We published results for multiple levels of trimaps; use drop-down menu at the top left corner to switch levels.
29.12.14We added general ranking to the rating table.
10.11.14“Sparse codes as Alpha Matte” was added.
26.09.14Source sequences are available for online view now. Full screen mode was added.
30.08.14Composite sequences are available now.
27.08.14“Refine edge tool in Adobe After Effects” was added.
25.08.14The official opening.

Overview

Introduction

The Video­Mat­ting pro­ject is the first pub­lic ob­jec­tive bench­mark for video-mat­ting meth­ods. It con­tains scat­ter plots and rat­ing ta­bles for dif­fer­ent qual­ity met­rics. In ad­di­tion, re­sults for par­tic­i­pat­ing meth­ods are avail­able for view­ing on a player equipped with a mov­able zoom re­gion. We be­lieve our work will help rank ex­ist­ing meth­ods and aid de­vel­op­ers of new meth­ods in im­prov­ing their re­sults.

Datasets

The data set con­sists of five mov­ing ob­jects cap­tured in front of a green plate and seven cap­tured us­ing the stop-mo­tion pro­ce­dure de­scribed be­low. We com­posed the ob­jects over a set of back­ground videos with var­i­ous lev­els of 3D cam­era mo­tion, color bal­ance, and noise. We pub­lished ground-truth data for two stop-mo­tion se­quences and hid the rest to en­sure fair­ness of the com­par­i­son.

Us­ing thresh­old­ing and mor­pho­log­i­cal op­er­a­tions on ground-truth al­pha mattes, we gen­er­ated nar­row trimaps. Then, we di­lated the re­sults us­ing graph­cut-based en­ergy min­i­miza­tion which pro­vides us with more hand­made-look­ing trimaps than com­mon mor­pho­log­i­cal di­la­tion.

Chroma Keying

green screen
stop motion
Alpha mattes from chroma keying and stop-motion capture for the same image region. The stop-motion result is significantly better at preserving details.

Chroma key­ing is a com­mon prac­tice of the cin­ema in­dus­try: the cin­e­matog­ra­pher cap­tures an ac­tor in front of a green or blue screen, then the VFX ex­pert re­places the back­ground us­ing spe­cial soft­ware. Our eval­u­a­tion uses five green-screen video se­quences with a sig­nif­i­cant amount of semi­trans­parency (e.g., hair or mo­tion blur), pro­vided to us by Hol­ly­wood cam­era work. We ex­tract al­pha mattes and cor­re­spond­ing fore­grounds us­ing The Foundry Key­light. Chroma key­ing en­ables us to get al­pha mattes of nat­ural-look­ing ob­jects with ar­bi­trary mo­tion. Nev­er­the­less, this tech­nique can’t guar­an­tee that the al­pha maps are nat­ural, be­cause it as­sumes the screen color is ab­sent from the fore­ground ob­ject. To get al­pha maps that have a very nat­ural ap­pear­ance, we use the stop-mo­tion method.

Stop Motion

One-step capture over different backgrounds. We use checkerboard backgrounds instead of solid ones to eliminate screen reflection.

We de­signed the fol­low­ing pro­ce­dure to per­form stop-mo­tion cap­ture: A fuzzy toy is placed on the plat­form in front of an LCD mon­i­tor. The toy ro­tates in small, dis­crete steps along a pre­de­fined 3D tra­jec­tory, con­trolled by two ser­vos con­nected to a com­puter. Af­ter each step the dig­i­tal cam­era in front of the setup cap­tures the mo­tion­less toy against a set of back­ground im­ages. At the end of this process, the toy is re­moved and the cam­era again cap­tures all of the back­ground im­ages.

We paid spe­cial at­ten­tion to avoid­ing re­flec­tions of the back­ground screen in the fore­ground ob­ject. These re­flec­tions can lead to false trans­parency that is es­pe­cially no­tice­able in non­trans­par­ent re­gions. To re­duce the amount of re­flec­tion we used checker­board back­ground im­ages in­stead of solid col­ors, thereby ad­just­ing the mean color of the screen to be the same for each back­ground.

At the end we cor­rected global light­ing changes caused by light bulb flick­er­ing. Thus fi­nally we ob­tain al­pha mattes with less than 1% of noise level. The de­tailed de­scrip­tion of ground-truth ex­trac­tion meth­ods is given in [3].

Evaluation Methodology

Our com­par­i­son in­cludes both im­age- and video-mat­ting meth­ods. We ap­ply each mat­ting method to the videos in our data set, and then com­pare the re­sults us­ing the fol­low­ing met­rics of per-pixel ac­cu­racy and tem­po­ral co­herency (look into our pa­per [3] for com­par­i­son of dif­fer­ent met­rics):

Here  de­notes to­tal num­ber of pix­els,  and  de­note trans­parency val­ues of video mat­ting un­der con­sid­er­a­tion and ground truth cor­re­spond­ingly at pixel  of frame , and  de­notes mo­tion vec­tor at pixel . We use op­ti­cal-flow al­go­rithm [11] com­puted for ground-truth se­quences. It is worth not­ing that mo­tion-aware met­rics will not give un­fair ad­van­tage to mat­ting meth­ods based on the sim­i­lar mo­tion es­ti­ma­tion method since they do not have ground truth se­quence. The de­tailed de­scrip­tion of used qual­ity met­rics is given in [3].

Public Sequences

For the train­ing pur­poses we pub­lish here three test se­quences with their ground-truth trans­parency maps. De­vel­op­ers and re­searchers are wel­come to use these se­quences, but we ask to cite us [3]

Participate

We in­vite de­vel­op­ers of video-mat­ting meth­ods to use our bench­mark. We will eval­u­ate the sub­mit­ted data and re­port scores to the de­vel­oper. In cases where the de­vel­oper specif­i­cally grants per­mis­sion, we will pub­lish the re­sults on our site. We can also pub­lish anony­mous scores for blind-re­viewed pa­pers. To par­tic­i­pate, sim­ply fol­low these steps:

  1. Download the data set containing our sequences: City, Flowers, Concert, Rain, Snow, Vitaliy, Artem, Slava, Juneau, Woods,
  2. Apply your method to each of our test cases
  3. Upload the alpha and foreground sequences to any file-sharing service. We kindly ask you to maintain these naming and directory-structure conventions. If your method doesn't explicitly produce the foreground images you can skip uploading them; in this case, we will generate them using method proposed in [7].
  4. Fill in this form to provide information about your method

Con­tact us by email with any ques­tions or sug­ges­tions at ques­tions@video­mat­ting.com.

Cite Us

To re­fer to our eval­u­a­tion or test se­quences in your work cite our pa­per [3].

@inproceedings{Erofeev2015,
	title={Perceptually Motivated Benchmark for Video Matting},
	author={Mikhail Erofeev and Yury Gitman and Dmitriy Vatolin and Alexey Fedorov and Jue Wang},
	year={2015},
	month={September},
	pages={99.1-99.12},
	articleno={99},
	numpages={12},
	booktitle={Proceedings of the British Machine Vision Conference (BMVC)},
	publisher={BMVA Press},
	doi={10.5244/C.29.99},
	isbn={1-901725-53-7},
	url={https://dx.doi.org/10.5244/C.29.99}
}
                

Evaluation

Rating

Trimap size:
Trimap available for each frame
year
rank
city
rain
concert
flowers
snow
Slava
Vitaliy
Artem
juneau
woods
Bayesian Matting [2]200112.658.761451.661210.798144.531330.361353.281343.701370.501496.6912140.1214
Robust Matting [12]20078.237.541026.1549.61488.15718.52833.311034.15746.911178.561180.4110
Refine Edge [15]20139.937.781138.48108.642129.561127.471231.36936.651138.758102.021394.5012
Closed Form [7]20087.931.34631.80710.9610149.481415.86523.67730.15530.60569.94982.1311
Learning Based [13]20095.929.98433.08810.849134.821215.23423.35531.46630.20358.38456.604
Nonlocal mattings [5]201114.754.211392.371533.8615243.471645.131564.051561.751567.3013135.0815171.7915
Shared Matting [4]20108.936.79946.721110.30787.64624.411035.181135.26945.231068.52867.178
Comprehensive Samplings [10]20087.934.48830.18612.101179.60416.17625.02835.25841.05977.721070.189
KNN Matting [1]201213.967.831592.221464.7316112.461032.961461.501450.871479.2015113.4114123.6313
Spectral Mattings [6]201215.586.3216105.241613.1512210.041548.761665.721668.8316109.4416157.1816182.8216
Sparse Samplings [14]20166.833.42733.5099.94581.06518.42723.64635.831038.00765.17664.126
Deep Matting [16]20164.627.33321.40320.331344.93219.23919.99330.00430.55446.18346.392
Self-Adaptive [17]20164.830.05529.3058.02196.44914.32321.96429.92335.97666.05761.495
Information-Flow [20]201710.650.811259.621324.801494.81825.931148.821242.141255.321264.02566.567
Background Matting [21]20202.522.77217.53210.06663.61310.80217.15225.19122.27245.44246.853
FBA Matting [22]20201.317.25116.6918.76340.9419.92115.30127.24219.10128.67130.451
Bayesian Matting [2]200112.558.761451.661210.797144.531330.361353.281343.701370.501496.6912140.1214
Robust Matting [12]20078.137.541026.1549.61388.15718.52833.311034.15746.911178.561180.4110
Refine Edge [15]20139.937.781138.48108.642129.561127.471231.36936.651138.758102.021394.5012
Closed Form [7]20087.831.34631.80710.969149.481415.86523.67730.15530.60569.94982.1311
Learning Based [13]20095.829.98433.08810.848134.821215.23423.35531.46630.20358.38456.604
Nonlocal mattings [5]201114.754.211392.371533.8615243.471645.131564.051561.751567.3013135.0815171.7915
Shared Matting [4]20108.836.79946.721110.30687.64624.411035.181135.26945.231068.52867.178
Comprehensive Samplings [10]20087.834.48830.18612.101079.60416.17625.02835.25841.05977.721070.189
KNN Matting [1]201213.967.831592.221464.7316112.461032.961461.501450.871479.2015113.4114123.6313
Spectral Mattings [6]201215.486.3216105.241613.1511210.041548.761665.721668.8316109.4416157.1816182.8216
Sparse Samplings [14]20166.833.42733.5099.94581.06518.42723.64635.831038.00765.17664.126
Deep Matting [16]20164.627.33321.40320.331344.93219.23919.99330.00430.55446.18346.392
Self-Adaptive [17]20164.830.05529.3058.02196.44914.32321.96429.92335.97666.05761.495
Information-Flow [20]201710.650.811259.621324.801494.81825.931148.821242.141255.321264.02566.567
Background Matting [21]20203.122.97218.15216.251268.75310.85217.08224.61122.74243.41246.863
FBA Matting [22]20201.418.90116.8719.85442.48110.16115.64127.35219.85128.80130.951
Bayesian Matting [2]200112.458.761451.661210.796144.531330.361353.281343.701370.501496.6912140.1214
Robust Matting [12]20078.137.541026.1549.61388.15718.52833.311034.15746.911178.561180.4110
Refine Edge [15]20139.937.781138.48108.642129.561127.471231.36936.651138.758102.021394.5012
Closed Form [7]20087.731.34631.80710.968149.481415.86523.67730.15530.60569.94982.1311
Learning Based [13]20095.729.98433.08810.847134.821215.23423.35531.46630.20358.38456.604
Nonlocal mattings [5]201114.754.211392.371533.8615243.471645.131564.051561.751567.3013135.0815171.7915
Shared Matting [4]20108.736.79946.721110.30587.64624.411035.181135.26945.231068.52867.178
Comprehensive Samplings [10]20087.834.48830.18612.101079.60416.17625.02835.25841.05977.721070.189
KNN Matting [1]201213.967.831592.221464.7316112.461032.961461.501450.871479.2015113.4114123.6313
Spectral Mattings [6]201215.486.3216105.241613.1511210.041548.761665.721668.8316109.4416157.1816182.8216
Sparse Samplings [14]20166.733.42733.5099.94481.06518.42723.64635.831038.00765.17664.126
Deep Matting [16]20164.427.33321.40220.331244.93219.23919.99330.00430.55446.18346.392
Self-Adaptive [17]20164.830.05529.3058.02196.44914.32321.96429.92335.97666.05761.495
Information-Flow [20]201710.550.811259.621324.801394.81825.931148.821242.141255.321264.02566.567
Background Matting [21]20203.425.77223.62329.331475.90311.74218.31224.83124.97243.92250.513
FBA Matting [22]20201.922.02119.82111.94944.62111.21117.05128.11222.14129.79133.061
Bayesian Matting [2]200113.835.931465.24129.2810104.681527.611335.041646.411360.341588.4615121.9115
Robust Matting [12]20078.222.32932.7347.26447.271316.04818.721034.47724.96853.421142.408
Refine Edge [15]20137.214.41244.0896.30340.69818.361016.53634.93817.83453.431243.6010
Closed Form [7]20086.216.37441.2977.67635.14714.59516.40532.43517.37351.40847.7112
Learning Based [13]20095.817.29642.9987.88834.03514.44417.07834.37618.37546.02439.464
Nonlocal mattings [5]201114.629.0713110.761414.1015120.011629.821532.691459.911536.2214108.431696.8214
Shared Matting [4]20109.222.721059.59117.72745.921220.341122.351138.021129.561147.45538.193
Comprehensive Samplings [10]20088.922.781139.97613.831433.11415.61617.51937.09925.72953.331045.0411
KNN Matting [1]201213.736.1615115.081525.581644.091027.801431.611356.081434.801387.531475.3613
Spectral Mattings [6]201213.965.5016131.74164.781103.611439.031634.391563.211664.401687.4413122.5316
Sparse Samplings [14]20168.921.78844.85109.861134.07616.94916.55737.401025.841051.77942.989
Deep Matting [16]20164.716.79525.93313.121326.74215.64714.58331.85319.25645.03337.272
Self-Adaptive [17]20165.818.76738.8957.37545.751113.36315.44432.26420.12749.15640.486
Information-Flow [20]201710.828.001278.551310.641241.13922.561228.711242.891229.751249.89741.447
Background Matting [21]20202.415.00321.4426.25232.74310.53213.40227.79116.29242.39240.105
FBA Matting [22]20201.912.65119.5518.79923.72110.30112.32128.52214.82129.27128.061
Bayesian Matting [2]200113.735.931465.24129.289104.681527.611335.041646.411360.341588.4615121.9115
Robust Matting [12]20078.222.32932.7347.26447.271316.04818.721034.47724.96853.421142.408
Refine Edge [15]20137.114.41244.0896.30240.69818.361016.53634.93817.83453.431243.6010
Closed Form [7]20086.216.37441.2977.67635.14714.59516.40532.43517.37351.40847.7112
Learning Based [13]20095.817.29642.9987.88834.03414.44417.07834.37618.37546.02439.465
Nonlocal mattings [5]201114.629.0713110.761414.1015120.011629.821532.691459.911536.2214108.431696.8214
Shared Matting [4]20109.222.721059.59117.72745.921220.341122.351138.021129.561147.45538.193
Comprehensive Samplings [10]20088.822.781139.97613.831433.11315.61617.51937.09925.72953.331045.0411
KNN Matting [1]201213.736.1615115.081525.581644.091027.801431.611356.081434.801387.531475.3613
Spectral Mattings [6]201213.965.5016131.74164.781103.611439.031634.391563.211664.401687.4413122.5316
Sparse Samplings [14]20168.821.78844.85109.861134.07516.94916.55737.401025.841051.77942.989
Deep Matting [16]20164.716.79525.93313.121326.74215.64714.58331.85319.25645.03337.272
Self-Adaptive [17]20165.818.76738.8957.37545.751113.36315.44432.26420.12749.15640.486
Information-Flow [20]201710.828.001278.551310.641241.13922.561228.711242.891229.751249.89741.447
Background Matting [21]20202.714.92322.5026.91334.67610.69213.21227.25116.16240.04238.864
FBA Matting [22]20202.013.67119.9819.621024.32110.47112.41128.71215.11129.15128.291
Bayesian Matting [2]200113.635.931465.24129.288104.681527.611335.041646.411360.341588.4615121.9115
Robust Matting [12]20078.122.32932.7347.26347.271316.04818.721034.47724.96853.421142.408
Refine Edge [15]20137.014.41144.0896.30240.69818.361016.53634.93817.83453.431243.6010
Closed Form [7]20086.016.37441.2977.67535.14614.59516.40532.43517.37351.40847.7112
Learning Based [13]20095.717.29642.9987.88734.03414.44417.07834.37618.37546.02439.465
Nonlocal mattings [5]201114.629.0713110.761414.1015120.011629.821532.691459.911536.2214108.431696.8214
Shared Matting [4]20109.222.721059.59117.72645.921220.341122.351138.021129.561147.45538.194
Comprehensive Samplings [10]20088.822.781139.97613.831433.11315.61617.51937.09925.72953.331045.0411
KNN Matting [1]201213.736.1615115.081525.581644.091027.801431.611356.081434.801387.531475.3613
Spectral Mattings [6]201213.965.5016131.74164.781103.611439.031634.391563.211664.401687.4413122.5316
Sparse Samplings [14]20168.621.78844.85109.86934.07516.94916.55737.401025.841051.77942.989
Deep Matting [16]20164.616.79525.93213.121326.74215.64714.58331.85319.25645.03337.272
Self-Adaptive [17]20165.718.76738.8957.37445.751113.36315.44432.26420.12749.15640.486
Information-Flow [20]201710.828.001278.551310.641241.13922.561228.711242.891229.751249.89741.447
Background Matting [21]20203.515.57328.72310.581136.50711.33113.56227.23116.71239.75238.143
FBA Matting [22]20202.214.43223.68110.551024.98111.49212.93129.29215.85129.64129.221
Bayesian Matting [2]200113.81.85143.58120.121113.33140.74142.31161.47134.51158.011416.2315
Robust Matting [12]20079.50.89110.9440.0744.82130.2590.88100.7781.58123.89123.9212
Refine Edge [15]20138.00.5561.48100.0623.99100.34100.5490.7070.8063.80113.169
Closed Form [7]20085.70.4931.3580.0982.8840.1950.3560.6130.6142.7563.3810
Learning Based [13]20095.60.5051.4290.0992.8850.1840.3450.6860.6452.3742.434
Nonlocal mattings [5]201114.91.56138.17150.701520.97161.11151.88152.51162.351411.171613.3714
Shared Matting [4]201010.10.79102.73110.0974.50120.35111.00120.91111.43112.9482.948
Comprehensive Samplings [10]20088.80.7491.0650.19123.1570.2270.3980.86101.2493.65103.4011
KNN Matting [1]201213.22.28156.40141.22163.4780.61131.02131.73141.94136.42137.7313
Spectral Mattings [6]201214.57.031613.25160.08615.36152.09161.84142.45156.59168.711519.9716
Sparse Samplings [14]20167.90.6681.3070.11103.0960.2580.3870.8691.0482.9492.767
Deep Matting [16]20164.60.5040.4830.32141.1520.2260.2530.6350.5731.8431.843
Self-Adaptive [17]20165.60.5871.1560.0614.43110.1630.3340.6240.8772.9172.666
Information-Flow [20]201710.21.10123.94130.27133.9090.41120.97111.10121.38102.4652.615
Background Matting [21]20202.30.3820.3220.0852.1130.1020.2020.4310.3521.6221.822
FBA Matting [22]20201.30.2310.3010.0730.9210.0910.1510.5020.2610.7710.871
Bayesian Matting [2]200113.71.85143.58120.121013.33140.74142.31161.47134.51158.011416.2315
Robust Matting [12]20079.40.89110.9440.0734.82130.2590.88100.7781.58123.89123.9212
Refine Edge [15]20138.00.5561.48100.0623.99100.34100.5490.7070.8063.80113.169
Closed Form [7]20085.50.4931.3580.0962.8840.1950.3560.6130.6142.7563.3810
Learning Based [13]20095.40.5051.4290.0972.8850.1840.3450.6860.6452.3742.434
Nonlocal mattings [5]201114.91.56138.17150.701520.97161.11151.88152.51162.351411.171613.3714
Shared Matting [4]20109.90.79102.73110.0954.50120.35111.00120.91111.43112.9482.948
Comprehensive Samplings [10]20088.80.7491.0650.19123.1570.2270.3980.86101.2493.65103.4011
KNN Matting [1]201213.22.28156.40141.22163.4780.61131.02131.73141.94136.42137.7313
Spectral Mattings [6]201214.37.031613.25160.08415.36152.09161.84142.45156.59168.711519.9716
Sparse Samplings [14]20167.80.6681.3070.1193.0960.2580.3870.8691.0482.9492.767
Deep Matting [16]20164.50.5040.4830.32141.1520.2260.2530.6350.5731.8431.842
Self-Adaptive [17]20165.60.5871.1560.0614.43110.1630.3340.6240.8772.9172.666
Information-Flow [20]201710.21.10123.94130.27133.9090.41120.97111.10121.38102.4652.615
Background Matting [21]20203.00.3920.3720.16112.5230.1120.2020.4210.3521.5121.853
FBA Matting [22]20201.80.2810.3110.0980.9710.0910.1610.5120.2810.7810.921
Bayesian Matting [2]200113.61.85143.58120.12913.33140.74142.31161.47134.51158.011416.2315
Robust Matting [12]20079.40.89110.9440.0734.82130.2590.88100.7781.58123.89123.9212
Refine Edge [15]20138.00.5561.48100.0623.99100.34100.5490.7070.8063.80113.169
Closed Form [7]20085.40.4931.3580.0962.8830.1950.3560.6130.6142.7563.3810
Learning Based [13]20095.30.5051.4290.0972.8840.1840.3450.6860.6452.3742.434
Nonlocal mattings [5]201114.91.56138.17150.701520.97161.11151.88152.51162.351411.171613.3714
Shared Matting [4]20109.90.79102.73110.0954.50120.35111.00120.91111.43112.9482.948
Comprehensive Samplings [10]20088.70.7491.0650.19113.1570.2270.3980.86101.2493.65103.4011
KNN Matting [1]201213.22.28156.40141.22163.4780.61131.02131.73141.94136.42137.7313
Spectral Mattings [6]201214.37.031613.25160.08415.36152.09161.84142.45156.59168.711519.9716
Sparse Samplings [14]20167.70.6681.3070.1183.0960.2580.3870.8691.0482.9492.767
Deep Matting [16]20164.30.5040.4820.32131.1520.2260.2530.6350.5731.8431.842
Self-Adaptive [17]20165.60.5871.1560.0614.43110.1630.3340.6240.8772.9172.666
Information-Flow [20]201710.11.10123.94130.27123.9090.41120.97111.10121.38102.4652.615
Background Matting [21]20203.60.4720.6530.49142.9850.1220.2220.4310.4121.5622.023
FBA Matting [22]20202.00.3510.4310.14101.0510.1210.1910.5320.3410.8211.061
  1. city
  2. rain
  3. concert
  4. flowers
  5. snow
  6. Slava
  7. Vitaliy
  8. Artem
  9. juneau
  10. woods
  1. Source
  2. Trimap
  3. BM [2]
  4. RM [12]
  5. RE [15]
  6. CF [7]
  7. LB [13]
  8. NlM [5]
  9. ShM [4]
  10. CS [10]
  11. KNN [1]
  12. SpM [6]
  13. SpSM [14]
  14. DM [16]
  15. SAM [17]
  16. IFM [20]
  17. MWBE [21]
  18. FBAM [22]
0 %
 
Note: Make sure you are using the latest version of your web browser (we recommend to use chromium-based web browsers)

Integral Plots

Subjective comparison

We carried out subjective comparison of 13 matting methods using Subjectify.us platform. We applied matting methods to videos from our dataset and then uploaded videos containing extracted foreground objects and ground-truth sequences to Subjectify.us. The platform hired study participants and showed them these videos in pairwise fashion. For each pair, participants were asked to choose the video with better visual quality or indicate that they are approximately equal. Each study participant compared 30 pairs including 4 hidden quality-control comparisons between ground truth and a low-quality method; answers of 23 participants were rejected, since they failed at least one quality-control question. In total 10556 answers from 406 participants were collected. Bradley-Terry [18] and Crowd Bradley-Terry [19] models were used to convert pairwise comparisons to subjective ranks. The study report generated by the platform is shown below.

Multidimensional Analysis

References

[1]Qifeng Chen, Dingzeyu Li, and Chi-Ke­ung Tang. KNN mat­ting. Trans­ac­tions on Pat­tern Analy­sis and Ma­chine In­tel­li­gence (TPAMI), 35(9):2175–2188, 2013. [ doi ,  pro­ject page ]
[2]Yung-Yu Chuang, Brian Cur­less, David H. Salesin, and Richard Szeliski. A bayesian ap­proach to dig­i­tal mat­ting. In Com­puter Vi­sion and Pat­tern Recog­ni­tion (CVPR), vol­ume 2, pages II-264–II-271, 2001. [ doi ,  pro­ject page ,  code ]
[3]Mikhail Ero­feev, Yury Git­man, Dmitriy Va­tolin, Alexey Fe­dorov, Jue Wang. Per­cep­tu­ally Mo­ti­vated Bench­mark for Video Mat­ting. British Ma­chine Vi­sion Con­fer­ence (BMVC), pages 99.1–99.12, 2015. [ doi ,  pdf ,  pro­ject page ]
[4]Ed­uardo S.L. Gastal and Manuel M. Oliveira. Shared sam­pling for real-time al­pha mat­ting. Com­puter Graph­ics Fo­rum, 29(2):575–584, 2010. [ pro­ject page ]
[5]Philip Lee and Ying Wu. Non­lo­cal mat­ting. In Com­puter Vi­sion and Pat­tern Recog­ni­tion (CVPR), pages 2193–2200, 2011. [ code ]
[6]A. Levin, A. Rav Acha, and D. Lischin­ski. Spec­tral mat­ting. Trans­ac­tions on Pat­tern Analy­sis and Ma­chine In­tel­li­gence (TPAMI), 30(10):1699–1712, 2008. [ doi ,  pro­ject page ]
[7]Anat Levin, Dani Lischin­ski, and Yair Weiss. A closed-form so­lu­tion to nat­ural im­age mat­ting. Trans­ac­tions on Pat­tern Analy­sis and Ma­chine In­tel­li­gence (TPAMI), 30(2):228–242, 2008. [ doi ,  code ]
[8]Christoph Rhe­mann, Carsten Rother, Jue Wang, Margrit Gelautz, Push­meet Kohli, and Pamela Rott. A per­cep­tu­ally mo­ti­vated on­line bench­mark for im­age mat­ting. In Com­puter Vi­sion and Pat­tern Recog­ni­tion (CVPR), pages 1826–1833, 2009. [ doi ]
[9]E. Shahrian and D. Ra­jan. Weighted color and tex­ture sam­ple se­lec­tion for im­age mat­ting. In Com­puter Vi­sion and Pat­tern Recog­ni­tion (CVPR), pages 718–725, 2012. [ doi ,  code ]
[10]E. Shahrian, D. Ra­jan, B. Price, and S. Co­hen. Im­prov­ing im­age mat­ting us­ing com­pre­hen­sive sam­pling sets. In Com­puter Vi­sion and Pat­tern Recog­ni­tion (CVPR), pages 636–643, 2013. [ doi ,  code ]
[11]Karen Symonyan, Sergey Gr­ishin, Dmitriy Va­tolin, and Dmitriy Popov. Fast video su­per­res­o­lu­tion via clas­si­fi­ca­tion. In­ter­na­tional Con­fer­ence on Im­age Pro­cess­ing (ICIP), pages 349–352, 2008. [ doi ]
[12]Jue Wang and Michael F. Co­hen. Op­ti­mized color sam­pling for ro­bust mat­ting. In Com­puter Vi­sion and Pat­tern Recog­ni­tion (CVPR), pages 1–8, 2007. [ doi ,  pro­ject page ]
[13]Yuan­jie Zheng and C. Kamb­hamettu. Learn­ing based dig­i­tal mat­ting. In In­ter­na­tional Con­fer­ence on Com­puter Vi­sion (ICCV), pages 889–896, 2009. [ doi ,  code ]
[14]Lev­ent Kara­can, Aykut Er­dem, Erkut Er­dem. Al­pha Mat­ting with KL-Di­ver­gence Based Sparse Sam­pling, IEEE Trans­ac­tions on Im­age Pro­cess­ing, 2017.
[15]http://​www.adobe.com/​en/​prod­ucts/​af­ter­ef­fects.html, Re­fine Edge tool in Adobe Af­ter Ef­fects CC.
[16]Ning Xu, Brian Price, Scott Co­hen, and Thomas Huang. Deep Mat­ting. In Com­puter Vi­sion and Pat­tern Recog­ni­tion (CVPR), 2017
[17]Guangy­ing Cao, Jian­wei Li, Xi­aowu Chen, Zhiqiang He. Patch-based self-adap­tive mat­ting for high-res­o­lu­tion im­age and video, The Vi­sual Com­puter, 1-15, 2017.
[18]Ralph Al­lan Bradley and Mil­ton E Terry. Rank analy­sis of in­com­plete block de­signs: I. the method of paired com­par­isons. Bio­metrika, 39(3/​4):324–345, 1952.
[19]Chen, Xi, et al. Pair­wise rank­ing ag­gre­ga­tion in a crowd­sourced set­ting. Pro­ceed­ings of the sixth ACM in­ter­na­tional con­fer­ence on Web search and data min­ing. ACM, 2013.
[20]Ya­giz Ak­soy, Tunc Ozan Ay­din and Marc Polle­feys. De­sign­ing Ef­fec­tive In­ter-Pixel In­for­ma­tion Flow for Nat­ural Im­age Mat­ting. In Com­puter Vi­sion and Pat­tern Recog­ni­tion (CVPR), 2017. [ doi ,  code ]
[21]Mat­ting with Back­ground Es­ti­ma­tion: A Novel Method for Ex­tract­ing Clean Fore­ground, IEEE Trans­ac­tion on Im­age Pro­cess­ing 2020 (anony­mous sub­mis­sion)
[22]F, B, Al­pha Mat­ting. Anony­mous ECCV 2020 sub­mis­sion #6826
Up